Feb 19 19:18:51 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 19:18:51 crc restorecon[4675]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 19:18:51 crc restorecon[4675]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 
19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51
crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 
19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 19:18:51 crc restorecon[4675]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 
crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 19:18:51 crc restorecon[4675]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:51 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc 
restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 19:18:52 crc restorecon[4675]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 19:18:52 crc kubenswrapper[4787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 19:18:52 crc kubenswrapper[4787]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 19:18:52 crc kubenswrapper[4787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 19:18:52 crc kubenswrapper[4787]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 19:18:52 crc kubenswrapper[4787]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 19:18:52 crc kubenswrapper[4787]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.650560 4787 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656029 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656067 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656077 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656085 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656093 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656101 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656113 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656124 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656134 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656144 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656153 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656164 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656173 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656182 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656190 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656198 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656207 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656215 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656224 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656231 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656240 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656248 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656260 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656269 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656277 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656286 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656311 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656320 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656327 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656335 4787 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656342 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656350 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656358 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656366 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656374 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656381 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656389 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656398 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656407 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656415 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656422 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656432 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656442 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656454 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656462 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656491 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656498 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656506 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656515 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656524 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656534 4787 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656543 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656552 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656560 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656567 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656576 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656584 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656635 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656644 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656652 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656660 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656668 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656677 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656686 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656694 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656701 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656709 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656716 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656723 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656732 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.656739 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657732 4787 flags.go:64] FLAG: --address="0.0.0.0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657758 4787 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657772 4787 flags.go:64] FLAG: --anonymous-auth="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657783 4787 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657796 4787 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657806 4787 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657818 4787 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657829 4787 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657839 4787 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657849 4787 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657858 4787 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657868 4787 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657877 4787 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657887 4787 flags.go:64] FLAG: --cgroup-root=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657896 4787 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657905 4787 flags.go:64] FLAG: --client-ca-file=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657914 4787 flags.go:64] FLAG: --cloud-config=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657922 4787 flags.go:64] FLAG: --cloud-provider=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657931 4787 flags.go:64] FLAG: --cluster-dns="[]"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657945 4787 flags.go:64] FLAG: --cluster-domain=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657954 4787 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657963 4787 flags.go:64] FLAG: --config-dir=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657972 4787 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657981 4787 flags.go:64] FLAG: --container-log-max-files="5"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.657993 4787 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658002 4787 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658011 4787 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658022 4787 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658031 4787 flags.go:64] FLAG: --contention-profiling="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658040 4787 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658049 4787 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658059 4787 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658068 4787 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658078 4787 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658088 4787 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658096 4787 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658108 4787 flags.go:64] FLAG: --enable-load-reader="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658117 4787 flags.go:64] FLAG: --enable-server="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658126 4787 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658137 4787 flags.go:64] FLAG: --event-burst="100"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658146 4787 flags.go:64] FLAG: --event-qps="50"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658155 4787 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658165 4787 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658173 4787 flags.go:64] FLAG: --eviction-hard=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658184 4787 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658194 4787 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658202 4787 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658211 4787 flags.go:64] FLAG: --eviction-soft=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658220 4787 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658229 4787 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658238 4787 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658247 4787 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658255 4787 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658264 4787 flags.go:64] FLAG: --fail-swap-on="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658273 4787 flags.go:64] FLAG: --feature-gates=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658284 4787 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658293 4787 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658303 4787 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658312 4787 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658321 4787 flags.go:64] FLAG: --healthz-port="10248"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658330 4787 flags.go:64] FLAG: --help="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658339 4787 flags.go:64] FLAG: --hostname-override=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658347 4787 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658358 4787 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658367 4787 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658376 4787 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658385 4787 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658394 4787 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658403 4787 flags.go:64] FLAG: --image-service-endpoint=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658412 4787 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658421 4787 flags.go:64] FLAG: --kube-api-burst="100"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658430 4787 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658439 4787 flags.go:64] FLAG: --kube-api-qps="50"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658447 4787 flags.go:64] FLAG: --kube-reserved=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658456 4787 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658465 4787 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658475 4787 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658484 4787 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658492 4787 flags.go:64] FLAG: --lock-file=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658501 4787 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658510 4787 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658520 4787 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658533 4787 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658541 4787 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658550 4787 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658559 4787 flags.go:64] FLAG: --logging-format="text"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658568 4787 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658578 4787 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658588 4787 flags.go:64] FLAG: --manifest-url=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658598 4787 flags.go:64] FLAG: --manifest-url-header=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658634 4787 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658643 4787 flags.go:64] FLAG: --max-open-files="1000000"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658655 4787 flags.go:64] FLAG: --max-pods="110"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658665 4787 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658675 4787 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658684 4787 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658694 4787 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658703 4787 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658712 4787 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658722 4787 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658744 4787 flags.go:64] FLAG: --node-status-max-images="50"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658754 4787 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658763 4787 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658772 4787 flags.go:64] FLAG: --pod-cidr=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658781 4787 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658794 4787 flags.go:64] FLAG: --pod-manifest-path=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658803 4787 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658820 4787 flags.go:64] FLAG: --pods-per-core="0"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658831 4787 flags.go:64] FLAG: --port="10250"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658843 4787 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658854 4787 flags.go:64] FLAG: --provider-id=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658866 4787 flags.go:64] FLAG: --qos-reserved=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658876 4787 flags.go:64] FLAG: --read-only-port="10255"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658885 4787 flags.go:64] FLAG: --register-node="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658894 4787 flags.go:64] FLAG: --register-schedulable="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658904 4787 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658920 4787 flags.go:64] FLAG: --registry-burst="10"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658932 4787 flags.go:64] FLAG: --registry-qps="5"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658943 4787 flags.go:64] FLAG: --reserved-cpus=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658955 4787 flags.go:64] FLAG: --reserved-memory=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658969 4787 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658981 4787 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.658993 4787 flags.go:64] FLAG: --rotate-certificates="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659003 4787 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659014 4787 flags.go:64] FLAG: --runonce="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659025 4787 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659036 4787 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659047 4787 flags.go:64] FLAG: --seccomp-default="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659059 4787 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659070 4787 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659081 4787 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659093 4787 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659105 4787 flags.go:64] FLAG: --storage-driver-password="root"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659117 4787 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659128 4787 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659141 4787 flags.go:64] FLAG: --storage-driver-user="root"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659152 4787 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659163 4787 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659175 4787 flags.go:64] FLAG: --system-cgroups=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659191 4787 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659209 4787 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659220 4787 flags.go:64] FLAG: --tls-cert-file=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659231 4787 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659246 4787 flags.go:64] FLAG: --tls-min-version=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659257 4787 flags.go:64] FLAG: --tls-private-key-file=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659269 4787 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659280 4787 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659290 4787 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659299 4787 flags.go:64] FLAG: --v="2"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659314 4787 flags.go:64] FLAG: --version="false"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659328 4787 flags.go:64] FLAG: --vmodule=""
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659345 4787 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.659357 4787 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659595 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659643 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659655 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659665 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659676 4787 feature_gate.go:330] unrecognized feature gate: Example
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659685 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659694 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659703 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659711 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659719 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659726 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659734 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659742 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659752 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659763 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659772 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659780 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659788 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659802 4787 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659811 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659820 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659828 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659837 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659844 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659852 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659860 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659868 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659876 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659884 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659892 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659899 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659910 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659921 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659929 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659937 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659945 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659952 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659961 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659969 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659979 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659988 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.659997 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660006 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660014 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660021 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660029 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660037 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660047 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660057 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660066 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660077 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660084 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660092 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660100 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660108 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660116 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660124 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660132 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660141 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660149 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660157 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660165 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660174 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660182 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660190 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660198 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660205 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660213 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660221 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660229 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.660236 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.660250 4787 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.671468 4787 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.671511 4787 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671650 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671665 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671673 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671680 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671686 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671693 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671699 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671705 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671712 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671719 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671726 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671735 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671744 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671751 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671760 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671767 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671774 4787 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671780 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671787 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671793 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671799 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671806 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671811 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671818 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671825 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671831 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671837 4787 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671844 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671852 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671859 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671865 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671872 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671877 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671884 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671893 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671901 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671909 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671915 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671922 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671930 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671937 4787 feature_gate.go:330] unrecognized feature gate: Example Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671943 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671950 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671956 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671965 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671976 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671984 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671991 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.671999 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672005 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672015 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672024 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672032 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672039 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672046 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672053 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672060 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672066 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672073 4787 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672080 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672088 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672094 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672101 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672109 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672116 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672121 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672127 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672133 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672138 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672143 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672148 4787 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.672158 4787 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672354 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672365 4787 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672370 4787 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672376 4787 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672382 4787 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672387 4787 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672394 4787 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672402 4787 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672409 4787 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672415 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672420 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672425 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672431 4787 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672438 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672444 4787 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672450 4787 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672455 4787 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672460 4787 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672465 4787 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672471 4787 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672476 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 19:18:52 crc 
kubenswrapper[4787]: W0219 19:18:52.672481 4787 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672486 4787 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672491 4787 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672498 4787 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672503 4787 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672508 4787 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672513 4787 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672518 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672523 4787 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672529 4787 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672534 4787 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672539 4787 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672545 4787 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672550 4787 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672555 4787 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672560 4787 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672565 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672570 4787 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672576 4787 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672581 4787 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672586 4787 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672593 4787 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672628 4787 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672635 4787 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672642 4787 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672647 4787 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672653 4787 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672659 4787 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672665 4787 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672675 4787 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672682 4787 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672688 4787 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672693 4787 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672699 4787 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672704 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672712 4787 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672719 4787 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672724 4787 feature_gate.go:330] unrecognized feature gate: Example Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672730 4787 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672736 4787 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672742 4787 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672749 4787 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672755 4787 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672761 4787 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672767 4787 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672774 4787 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672780 4787 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672787 4787 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672793 4787 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.672799 4787 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.672809 4787 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.673061 4787 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.684692 4787 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.684802 4787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.686045 4787 server.go:997] "Starting client certificate rotation" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.686073 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.686343 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 23:57:17.42388276 +0000 UTC Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.686528 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.713414 4787 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.717045 4787 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.717747 4787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.738228 4787 log.go:25] "Validated CRI v1 runtime API" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.781280 4787 log.go:25] "Validated CRI v1 image API" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.783259 4787 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.789560 4787 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-19-14-04-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.789597 4787 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.809032 4787 manager.go:217] Machine: {Timestamp:2026-02-19 19:18:52.805265893 +0000 UTC m=+0.595931855 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 
CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b30ba7af-b2e2-44e0-b259-a04a3d082dd3 BootID:6db17b6c-86dc-4c6b-a0c1-ee45005d3057 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fc:fb:a0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fc:fb:a0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:92:55:ac Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d5:bb:ca Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:10:f1:1c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c9:7f:12 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:05:58:27:b5:25 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:06:32:55:7b:fc:2b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data 
Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 
Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.809313 4787 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.809500 4787 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.811938 4787 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.812238 4787 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.812285 4787 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.812562 4787 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.812575 4787 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.813303 4787 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.813345 4787 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.814840 4787 state_mem.go:36] "Initialized new in-memory state store" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.814964 4787 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.819549 4787 kubelet.go:418] "Attempting to sync node with API server" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.819574 4787 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.819600 4787 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.819629 4787 kubelet.go:324] "Adding apiserver pod source" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.819642 4787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 
19:18:52.825273 4787 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.827129 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.827147 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.827245 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.827254 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.827920 4787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.829772 4787 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831742 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831785 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831802 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831816 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831839 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831855 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831872 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831894 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831910 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831924 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831963 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.831978 4787 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.833826 4787 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.834791 4787 server.go:1280] "Started kubelet" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.835181 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.835827 4787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.835824 4787 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.836494 4787 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 19 19:18:52 crc systemd[1]: Started Kubernetes Kubelet. Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.837391 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.837428 4787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.837693 4787 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.837777 4787 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.837786 4787 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.837918 4787 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.837939 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.838031 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:10:12.508943937 +0000 UTC Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.838293 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.838332 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.839957 4787 factory.go:55] Registering systemd factory Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.839983 4787 factory.go:221] Registration of the systemd container factory successfully Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.840634 4787 factory.go:153] Registering CRI-O factory Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.840656 4787 factory.go:221] Registration of the crio container factory successfully Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.840808 4787 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 19:18:52 crc 
kubenswrapper[4787]: I0219 19:18:52.840849 4787 factory.go:103] Registering Raw factory Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.840888 4787 manager.go:1196] Started watching for new ooms in manager Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.844145 4787 manager.go:319] Starting recovery of all containers Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.849227 4787 server.go:460] "Adding debug handlers to kubelet server" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.845940 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895bbff1e034b8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 19:18:52.834745231 +0000 UTC m=+0.625411223,LastTimestamp:2026-02-19 19:18:52.834745231 +0000 UTC m=+0.625411223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.855861 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.855946 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: 
I0219 19:18:52.855961 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.855974 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.855989 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856001 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856013 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856025 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856039 4787 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856051 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856066 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856080 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856092 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856129 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856142 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856159 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856174 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856188 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856199 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856214 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856227 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856242 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856254 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856265 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856279 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856292 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856329 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" 
seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856346 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856365 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856381 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856396 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856409 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856429 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856440 
4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856454 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856467 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856480 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856493 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856508 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856520 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856533 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856547 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856561 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856575 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856586 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856601 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856658 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856672 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856686 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856700 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856711 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856722 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856738 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856750 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856766 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856776 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856789 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856800 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" 
seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856810 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856821 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856831 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856841 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856891 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856907 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 
19:18:52.856917 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856927 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856937 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856948 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856959 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856969 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856980 4787 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.856990 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857061 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857078 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857091 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857105 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857117 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857130 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857143 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857155 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857169 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857182 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857197 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857213 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857225 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857238 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857255 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857267 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857279 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857294 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857306 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857319 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857355 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857369 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857381 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857394 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857411 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857423 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857436 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857447 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857460 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857472 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857486 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857498 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857517 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857531 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857546 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857558 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857572 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857585 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857597 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857629 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857643 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857656 4787 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857671 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857705 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857717 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857727 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857742 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857755 4787 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857768 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857781 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857792 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857804 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857817 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857829 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857842 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857857 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857872 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857886 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857899 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857910 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857922 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857936 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857949 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857961 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857977 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.857990 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858003 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858016 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858029 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858042 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858053 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858065 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858079 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858090 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858102 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858116 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858129 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858141 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858155 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858167 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.858179 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862037 4787 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862169 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862265 4787 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862350 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862470 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862550 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862653 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862766 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862856 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.862957 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863639 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863678 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863692 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863704 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863720 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863796 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863814 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863830 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863843 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863856 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863898 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" 
seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863913 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863928 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863967 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863983 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.863998 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864027 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864043 4787 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864058 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864074 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864089 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864112 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864123 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864133 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864142 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864154 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864164 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864193 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864204 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864266 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864278 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864288 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864419 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864430 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864439 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864451 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864511 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864551 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864566 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864575 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864584 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864598 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864623 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864632 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864642 4787 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864669 4787 reconstruct.go:97] "Volume reconstruction finished" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.864677 4787 reconciler.go:26] "Reconciler: start to sync state" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.867054 4787 manager.go:324] Recovery completed Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.876064 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.878832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.879025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.879134 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.881707 4787 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.881733 4787 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.881812 4787 state_mem.go:36] "Initialized new in-memory state store" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.888359 4787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.890522 4787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.890566 4787 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.890600 4787 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.891090 4787 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 19:18:52 crc kubenswrapper[4787]: W0219 19:18:52.891307 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.891370 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:52 crc 
kubenswrapper[4787]: I0219 19:18:52.898681 4787 policy_none.go:49] "None policy: Start" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.900329 4787 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.900355 4787 state_mem.go:35] "Initializing new in-memory state store" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.938006 4787 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.946025 4787 manager.go:334] "Starting Device Plugin manager" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.946084 4787 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.946100 4787 server.go:79] "Starting device plugin registration server" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.946723 4787 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.946755 4787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.947058 4787 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.947231 4787 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.947247 4787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 19:18:52 crc kubenswrapper[4787]: E0219 19:18:52.955428 4787 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.991330 4787 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.991427 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.992908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.992944 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.992956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.993060 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.993213 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.993277 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.995077 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.995118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.995138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.995330 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.996054 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.996142 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.997879 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc 
kubenswrapper[4787]: I0219 19:18:52.998083 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998137 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998959 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.998969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.999091 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.999281 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.999334 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.999831 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.999850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:52 crc kubenswrapper[4787]: I0219 19:18:52.999858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:52.999961 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:52.999979 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.000547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.000576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.000587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.000871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.000894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.000903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.038551 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.046915 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.047842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.047875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.047886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.047910 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.048370 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066492 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066548 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066580 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066626 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066650 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066686 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066877 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066927 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066950 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066976 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.066998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.067016 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.067035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.067078 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.067108 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168095 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168196 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168264 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168304 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" 
Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168326 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168447 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168487 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168493 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168518 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168541 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168558 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168640 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 
19:18:53.168668 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168693 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168755 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168780 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.168872 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.249210 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.250205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.250240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.250248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.250270 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.250769 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.321791 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.335344 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.352068 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.367235 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.372078 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-46754bb65a6b89a03089bd373912b96c92a2dc27c10cf2501a629989ca80c138 WatchSource:0}: Error finding container 46754bb65a6b89a03089bd373912b96c92a2dc27c10cf2501a629989ca80c138: Status 404 returned error can't find the container with id 46754bb65a6b89a03089bd373912b96c92a2dc27c10cf2501a629989ca80c138 Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.372713 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-71001bef63d8259dafb03fd218db10933840d42593af83a7a533d594afae68a2 WatchSource:0}: Error finding container 71001bef63d8259dafb03fd218db10933840d42593af83a7a533d594afae68a2: Status 404 returned error can't find the container with id 71001bef63d8259dafb03fd218db10933840d42593af83a7a533d594afae68a2 Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.372834 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.382316 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4cdee72a8ee3e54a960ce1354146d472a8b3fb08c9065a0ac2f548869c377b94 WatchSource:0}: Error finding container 4cdee72a8ee3e54a960ce1354146d472a8b3fb08c9065a0ac2f548869c377b94: Status 404 returned error can't find the container with id 4cdee72a8ee3e54a960ce1354146d472a8b3fb08c9065a0ac2f548869c377b94 Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.388020 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0a08a7ecaeaf39f6271e17b593d9ace92075494e4d582854cffbf0c8334fc27e WatchSource:0}: Error finding container 0a08a7ecaeaf39f6271e17b593d9ace92075494e4d582854cffbf0c8334fc27e: Status 404 returned error can't find the container with id 0a08a7ecaeaf39f6271e17b593d9ace92075494e4d582854cffbf0c8334fc27e Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.390110 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c49607b092a996bde555a9faef6233e817de0d4e19517cb6ba9afa14c822170f WatchSource:0}: Error finding container c49607b092a996bde555a9faef6233e817de0d4e19517cb6ba9afa14c822170f: Status 404 returned error can't find the container with id c49607b092a996bde555a9faef6233e817de0d4e19517cb6ba9afa14c822170f Feb 19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.439537 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: 
connection refused" interval="800ms" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.651423 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.652778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.652828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.652840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.652872 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.653344 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.836330 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.838335 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:23:37.585599491 +0000 UTC Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.860310 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 
19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.860385 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:53 crc kubenswrapper[4787]: W0219 19:18:53.869279 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:53 crc kubenswrapper[4787]: E0219 19:18:53.869325 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.894470 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"71001bef63d8259dafb03fd218db10933840d42593af83a7a533d594afae68a2"} Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.896021 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"46754bb65a6b89a03089bd373912b96c92a2dc27c10cf2501a629989ca80c138"} Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.897266 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c49607b092a996bde555a9faef6233e817de0d4e19517cb6ba9afa14c822170f"} Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.902851 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0a08a7ecaeaf39f6271e17b593d9ace92075494e4d582854cffbf0c8334fc27e"} Feb 19 19:18:53 crc kubenswrapper[4787]: I0219 19:18:53.904672 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4cdee72a8ee3e54a960ce1354146d472a8b3fb08c9065a0ac2f548869c377b94"} Feb 19 19:18:54 crc kubenswrapper[4787]: W0219 19:18:54.119125 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:54 crc kubenswrapper[4787]: E0219 19:18:54.119857 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:54 crc kubenswrapper[4787]: E0219 19:18:54.241326 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Feb 19 19:18:54 crc kubenswrapper[4787]: W0219 19:18:54.438894 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:54 crc kubenswrapper[4787]: E0219 19:18:54.438963 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.453892 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.456220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.456275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.456286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.456320 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 19:18:54 crc kubenswrapper[4787]: E0219 19:18:54.456917 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.727691 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 19:18:54 crc kubenswrapper[4787]: E0219 19:18:54.729002 4787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.836785 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.838985 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:59:08.465167419 +0000 UTC Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.911377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.911453 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.912174 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.912266 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe"} Feb 19 19:18:54 crc 
kubenswrapper[4787]: I0219 19:18:54.912290 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.913940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.914003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.914032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.914945 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838" exitCode=0 Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.915006 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.915080 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.916414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.916445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.916455 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.918581 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.918638 4787 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8" exitCode=0 Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.918674 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.918787 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.921328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.921383 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.921398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.923963 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.924020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.924037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.924510 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4491260bdc465ce64ed848b886560268bf3ad045d576976e61252baab0cf2fe3" exitCode=0 Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.924586 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4491260bdc465ce64ed848b886560268bf3ad045d576976e61252baab0cf2fe3"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.924666 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.926118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.926174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.926192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.930061 4787 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3" exitCode=0 Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.930117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3"} Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.930248 4787 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.931218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.931265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:54 crc kubenswrapper[4787]: I0219 19:18:54.931282 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:55 crc kubenswrapper[4787]: W0219 19:18:55.729295 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:55 crc kubenswrapper[4787]: E0219 19:18:55.729422 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.836382 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.839559 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:24:50.150204695 +0000 UTC Feb 19 19:18:55 crc kubenswrapper[4787]: E0219 19:18:55.842946 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.934455 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.934506 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.934520 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.935766 4787 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7" exitCode=0 Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.935834 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.937076 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d1e554eb3a53ec57a484b038667676c38dab588131aee6ca0e5aaff447cadcec"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.937110 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.937901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.937929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.937938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.939639 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.939685 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.939632 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.939806 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.939820 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96"} Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.940567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.940644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.940662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.941110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.941136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:55 crc kubenswrapper[4787]: I0219 19:18:55.941147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.057381 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.058660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.058692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.058701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.058723 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" 
Feb 19 19:18:56 crc kubenswrapper[4787]: E0219 19:18:56.059143 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 19 19:18:56 crc kubenswrapper[4787]: W0219 19:18:56.827912 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:56 crc kubenswrapper[4787]: E0219 19:18:56.828022 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.835952 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.840130 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:41:52.902781906 +0000 UTC Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.943053 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945437 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b33ea1a99fbaffbd8fc53911933fbff31a803267162a604316122725dba1d002" exitCode=255 Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945559 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945626 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945638 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945544 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b33ea1a99fbaffbd8fc53911933fbff31a803267162a604316122725dba1d002"} Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde"} Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945849 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.945959 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.946749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.946790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.946806 4787 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947366 4787 scope.go:117] "RemoveContainer" containerID="b33ea1a99fbaffbd8fc53911933fbff31a803267162a604316122725dba1d002" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947413 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:56 crc kubenswrapper[4787]: I0219 19:18:56.947449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.179290 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:57 crc 
kubenswrapper[4787]: I0219 19:18:57.179449 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.180493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.180520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.180529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.457365 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.841212 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:07:37.455249465 +0000 UTC Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.950023 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.951734 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9"} Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.951820 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.952737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 
19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.952770 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.952785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.953792 4787 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f" exitCode=0 Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.953845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f"} Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.953945 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.957125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.957166 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:57 crc kubenswrapper[4787]: I0219 19:18:57.957177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.837172 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.837436 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.838943 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.839015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.839029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.842118 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:12:51.8045155 +0000 UTC Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.962813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414"} Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.962892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063"} Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.962908 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.962965 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.962913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d"} Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.963098 4787 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.963121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd"} Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.963160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a"} Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.963830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.963857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.963895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.964132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.964170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:58 crc kubenswrapper[4787]: I0219 19:18:58.964179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.000473 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.211694 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.211941 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.213477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.213527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.213537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.259792 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.260952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.260993 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.261005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.261104 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.842897 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:31:54.549705397 +0000 UTC Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.962149 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 
19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.965226 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.965296 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.965444 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.966492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.966515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.966564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.966593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.966535 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:18:59 crc kubenswrapper[4787]: I0219 19:18:59.966645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.185385 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.185729 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.187132 4787 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.187181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.187195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.843643 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:51:12.554572503 +0000 UTC Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.928238 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.969956 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.971069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.971118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:00 crc kubenswrapper[4787]: I0219 19:19:00.971131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.404929 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.405197 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.406783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.406835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.406877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.844242 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:26:38.515781518 +0000 UTC Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.990957 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.991148 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.992598 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.992679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.992693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:01 crc kubenswrapper[4787]: I0219 19:19:01.996789 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:19:02 crc kubenswrapper[4787]: I0219 19:19:02.845004 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:42:06.09065155 +0000 UTC Feb 19 19:19:02 crc kubenswrapper[4787]: E0219 19:19:02.955586 4787 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 19:19:02 crc kubenswrapper[4787]: I0219 19:19:02.975816 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:02 crc kubenswrapper[4787]: I0219 19:19:02.976672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:02 crc kubenswrapper[4787]: I0219 19:19:02.976735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:02 crc kubenswrapper[4787]: I0219 19:19:02.976752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:03 crc kubenswrapper[4787]: I0219 19:19:03.185968 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 19:19:03 crc kubenswrapper[4787]: I0219 19:19:03.186086 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 19:19:03 crc kubenswrapper[4787]: I0219 19:19:03.845484 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:43:23.905339228 +0000 UTC Feb 19 19:19:04 crc kubenswrapper[4787]: I0219 19:19:04.846446 4787 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:56:40.919740914 +0000 UTC Feb 19 19:19:05 crc kubenswrapper[4787]: I0219 19:19:05.076087 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 19:19:05 crc kubenswrapper[4787]: I0219 19:19:05.076359 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:05 crc kubenswrapper[4787]: I0219 19:19:05.078220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:05 crc kubenswrapper[4787]: I0219 19:19:05.078264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:05 crc kubenswrapper[4787]: I0219 19:19:05.078279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:05 crc kubenswrapper[4787]: I0219 19:19:05.846981 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:02:14.097975357 +0000 UTC Feb 19 19:19:06 crc kubenswrapper[4787]: I0219 19:19:06.847693 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:45:11.681838471 +0000 UTC Feb 19 19:19:06 crc kubenswrapper[4787]: W0219 19:19:06.853044 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 19:19:06 crc kubenswrapper[4787]: I0219 19:19:06.853142 4787 trace.go:236] Trace[1690524039]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 19:18:56.852) (total time: 10000ms): Feb 19 19:19:06 crc kubenswrapper[4787]: Trace[1690524039]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:19:06.853) Feb 19 19:19:06 crc kubenswrapper[4787]: Trace[1690524039]: [10.000848862s] [10.000848862s] END Feb 19 19:19:06 crc kubenswrapper[4787]: E0219 19:19:06.853166 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 19:19:06 crc kubenswrapper[4787]: W0219 19:19:06.985021 4787 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 19:19:06 crc kubenswrapper[4787]: I0219 19:19:06.985136 4787 trace.go:236] Trace[964937879]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 19:18:56.983) (total time: 10001ms): Feb 19 19:19:06 crc kubenswrapper[4787]: Trace[964937879]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:19:06.985) Feb 19 19:19:06 crc kubenswrapper[4787]: Trace[964937879]: [10.001837288s] [10.001837288s] END Feb 19 19:19:06 crc kubenswrapper[4787]: E0219 19:19:06.985165 4787 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 19:19:07 crc kubenswrapper[4787]: I0219 19:19:07.457451 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 19:19:07 crc kubenswrapper[4787]: I0219 19:19:07.457557 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 19:19:07 crc kubenswrapper[4787]: I0219 19:19:07.837506 4787 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 19:19:07 crc kubenswrapper[4787]: I0219 19:19:07.848802 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:11:23.133869243 +0000 UTC Feb 19 19:19:08 crc kubenswrapper[4787]: I0219 19:19:08.124875 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 19:19:08 crc kubenswrapper[4787]: I0219 19:19:08.124964 4787 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 19:19:08 crc kubenswrapper[4787]: I0219 19:19:08.849890 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:05:06.726252365 +0000 UTC Feb 19 19:19:09 crc kubenswrapper[4787]: I0219 19:19:09.216889 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:19:09 crc kubenswrapper[4787]: I0219 19:19:09.217063 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:09 crc kubenswrapper[4787]: I0219 19:19:09.218387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:09 crc kubenswrapper[4787]: I0219 19:19:09.218444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:09 crc kubenswrapper[4787]: I0219 19:19:09.218460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:09 crc kubenswrapper[4787]: I0219 19:19:09.850076 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:49:31.255785718 +0000 UTC Feb 19 19:19:10 crc kubenswrapper[4787]: I0219 19:19:10.850827 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:41:53.71542453 +0000 UTC Feb 19 19:19:11 crc kubenswrapper[4787]: I0219 19:19:11.851456 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:33:33.866781968 +0000 UTC Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.276668 4787 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.463091 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.463235 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.464776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.464826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.464840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.469585 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.614133 4787 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 19:19:12 crc kubenswrapper[4787]: I0219 19:19:12.852539 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:18:29.148039565 +0000 UTC Feb 19 19:19:12 crc kubenswrapper[4787]: E0219 19:19:12.955815 4787 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.003063 4787 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.003118 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.004114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.004152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.004163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.108890 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.110945 4787 trace.go:236] Trace[877703857]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 19:18:59.236) (total time: 13873ms): Feb 19 19:19:13 crc kubenswrapper[4787]: Trace[877703857]: ---"Objects listed" error: 13873ms (19:19:13.110) Feb 19 19:19:13 crc kubenswrapper[4787]: Trace[877703857]: [13.873900959s] [13.873900959s] END Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.111113 4787 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.112116 4787 trace.go:236] Trace[748077376]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 19:19:01.598) (total time: 11513ms): Feb 19 19:19:13 crc kubenswrapper[4787]: Trace[748077376]: ---"Objects listed" error: 11513ms (19:19:13.111) Feb 19 19:19:13 
crc kubenswrapper[4787]: Trace[748077376]: [11.513987085s] [11.513987085s] END Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.112160 4787 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.112258 4787 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.113417 4787 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.136838 4787 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.153203 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47536->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.153204 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47550->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.153338 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47550->192.168.126.11:17697: read: connection 
reset by peer" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.153259 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47536->192.168.126.11:17697: read: connection reset by peer" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.153671 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.153706 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.186497 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.186564 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.519273 4787 csr.go:261] certificate signing request csr-hzkbd is approved, waiting to be issued Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.560943 4787 csr.go:257] certificate signing request csr-hzkbd is issued Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.831691 4787 apiserver.go:52] "Watching apiserver" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.834083 4787 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.834542 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.835045 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.835181 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.835978 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.836105 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.836334 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.836221 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.836658 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.838343 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.838427 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.839192 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.839197 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.839377 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.844437 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.844505 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.844581 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.844659 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.844802 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.849999 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.853007 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-12-28 04:50:48.043744092 +0000 UTC Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.884298 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.897823 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.912564 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914649 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914724 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914787 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914850 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914879 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914897 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914921 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914946 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.914967 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.915043 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.915137 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:14.415108622 +0000 UTC m=+22.205774564 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.915426 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.915488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.915513 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:14.415492272 +0000 UTC m=+22.206158304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.915832 4787 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.920294 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.920628 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.921860 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.929686 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.929735 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.929752 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.929847 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:14.42982184 +0000 UTC m=+22.220487782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.929875 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.930551 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.931370 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.931493 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.931575 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:13 crc kubenswrapper[4787]: E0219 19:19:13.931747 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:14.4317254 +0000 UTC m=+22.222391402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.939746 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.940417 4787 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.942393 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.945652 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.949105 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.958597 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:13 crc kubenswrapper[4787]: I0219 19:19:13.970310 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.007483 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.008042 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.009910 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9" exitCode=255 Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.009968 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9"} Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.010034 4787 scope.go:117] "RemoveContainer" containerID="b33ea1a99fbaffbd8fc53911933fbff31a803267162a604316122725dba1d002" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.015875 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.015925 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 
19:19:14.015954 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.015982 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.016043 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.016068 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.016523 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.016464 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod 
"c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.016472 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.016965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017068 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017076 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017080 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017174 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017195 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017240 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 
19:19:14.017235 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017278 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017407 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017346 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017538 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.017772 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018059 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018183 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018223 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018276 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018305 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018330 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018356 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018361 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018382 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018410 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018429 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018436 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018485 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018515 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018544 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018569 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018593 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018639 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018675 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018700 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018731 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018759 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 19:19:14 crc kubenswrapper[4787]: 
I0219 19:19:14.018785 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018811 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018836 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018927 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018939 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018954 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018966 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.018983 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019008 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019030 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019053 
4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019086 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019076 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019145 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019156 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019198 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019221 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019245 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019266 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019289 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019304 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019314 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019324 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019381 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019382 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" 
(OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019416 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019453 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019480 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019506 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019534 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019564 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019589 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019640 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019475 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019535 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019557 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.019660 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:19:14.519636221 +0000 UTC m=+22.310302333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022172 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022195 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022217 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022237 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022254 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022271 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022290 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022297 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022307 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022324 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022350 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022383 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022409 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022433 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022491 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022514 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022541 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022557 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022566 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022590 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022631 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022655 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022679 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022705 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022753 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022797 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022819 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022840 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022892 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022915 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022938 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022963 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022986 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023008 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023031 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023052 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023112 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 19:19:14 crc 
kubenswrapper[4787]: I0219 19:19:14.023136 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023160 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023184 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023211 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023238 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023267 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023293 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023316 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023338 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023363 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023384 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 19:19:14 crc 
kubenswrapper[4787]: I0219 19:19:14.023405 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023428 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023450 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023473 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024204 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024268 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024323 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024365 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024400 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024432 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024493 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024523 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024555 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024577 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024646 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024673 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025238 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025312 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025342 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025377 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025407 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025436 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025477 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025578 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025637 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025743 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025774 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025808 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025840 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025873 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025905 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025930 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025963 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026024 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026044 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026065 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026088 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026113 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026149 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026181 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026211 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026264 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 19:19:14 
crc kubenswrapper[4787]: I0219 19:19:14.026298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026331 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026368 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026400 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026445 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026475 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026535 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026570 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026600 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026643 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026672 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026705 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026733 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026775 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026806 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 
19:19:14.026835 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026873 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026904 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026933 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026958 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026990 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027022 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027054 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027090 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027175 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027207 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027234 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027307 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027336 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027369 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027407 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" 
(UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027451 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027478 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027513 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027553 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027583 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027645 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027789 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027935 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028078 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028106 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028121 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028133 4787 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028143 4787 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028158 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028168 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019752 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019928 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.019935 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.020101 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.020190 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.020491 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.020874 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021050 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021098 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021259 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021298 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021532 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021702 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.021887 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.022100 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023178 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.023491 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024500 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024647 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024773 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.024969 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028389 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025328 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025459 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025587 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025765 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025758 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025780 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026109 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028699 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028178 4787 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028753 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028768 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028782 4787 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028799 4787 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 
19:19:14.028814 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028826 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028844 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028909 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028967 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028993 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029010 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029035 4787 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029052 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029071 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029086 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029097 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029110 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029123 4787 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029139 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath 
\"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029152 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029432 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026269 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026354 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026514 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026536 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.026677 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027003 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027019 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027391 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027445 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027459 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.027520 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028038 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.028165 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.025042 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.029757 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.030270 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.030748 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.030866 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.031240 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.031391 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.032155 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.033642 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.033841 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.034007 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.034145 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.034192 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.033461 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.034696 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.035519 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.035894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.040051 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.034201 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.042316 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.042398 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.035927 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.042426 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.043145 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.043285 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.043656 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.043964 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.044468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.044908 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.045090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.045236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.045218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.045262 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.045760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.045775 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.047263 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.042873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.048014 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.048217 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.048603 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.048985 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.049267 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.049626 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.052032 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.052230 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.052281 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.053057 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.053254 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.053384 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055410 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.056328 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055733 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055776 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055890 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055936 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055997 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.056186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055406 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.056776 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.055076 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.057747 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.058176 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.058538 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.059245 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.059697 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.059829 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.059866 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.059894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.060019 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.060043 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.060586 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.060635 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.060641 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.060932 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.061155 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.061257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.062402 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.062565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.062841 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.064378 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.066739 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.067325 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.067876 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.067928 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.068071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.068151 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.068484 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.068999 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.070993 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.071231 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.072337 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.072623 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.072663 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.072625 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.072813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.073081 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.073821 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.074721 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.077927 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.078474 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.078478 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.079151 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.078181 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.080912 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.080993 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.081026 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.081092 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.081257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.081334 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.081540 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.081729 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082090 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082477 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082530 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082554 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082885 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.082941 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.083039 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.083070 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.090094 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.091343 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.094348 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.106197 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.106197 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.108951 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.122372 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130888 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130935 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130947 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130957 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130968 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130978 4787 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130988 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.130997 4787 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131006 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131016 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131024 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131033 4787 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131045 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131054 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131063 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131072 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131080 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131089 4787 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131098 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131106 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131116 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131125 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131133 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131141 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131151 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131162 4787 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node 
\"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131171 4787 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131181 4787 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131190 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131200 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131209 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131218 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131227 4787 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131235 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131244 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131254 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131263 4787 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131274 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131286 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131313 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131323 4787 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131333 4787 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131343 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131353 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131361 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131370 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131379 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131388 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131397 4787 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131406 4787 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131416 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131426 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131435 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131444 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131453 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" 
DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131463 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131473 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131483 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131492 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131501 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131509 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131518 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131527 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131539 4787 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131547 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131557 4787 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131568 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131579 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131589 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131599 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131622 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131644 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131655 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131664 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131673 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131683 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131693 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131703 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131713 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131722 4787 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131731 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131740 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131749 4787 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131758 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131768 4787 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131777 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131786 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131795 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131804 4787 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131813 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131824 4787 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 
19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131833 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131842 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131851 4787 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131860 4787 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131869 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131900 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131909 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131918 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131930 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131939 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131947 4787 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131959 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131969 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131978 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131987 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.131996 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132004 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132014 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132021 4787 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132030 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132038 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132046 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132056 4787 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132063 4787 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132072 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132081 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132089 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132097 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132105 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132115 
4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132123 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132131 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132139 4787 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132147 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132156 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132164 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132173 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132181 4787 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132190 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132199 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132207 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132216 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132224 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132232 4787 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" 
Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132246 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132255 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132263 4787 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132272 4787 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132280 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132288 4787 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132296 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132305 4787 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132313 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132320 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132329 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132337 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132345 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132353 4787 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132361 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132370 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132379 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132387 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132395 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132404 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132412 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132420 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath 
\"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132428 4787 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132437 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132446 4787 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132454 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132463 4787 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132471 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132479 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132489 4787 reconciler_common.go:293] 
"Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.132497 4787 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.155554 4787 scope.go:117] "RemoveContainer" containerID="7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9" Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.155817 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.158768 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.159078 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.172215 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.175063 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.436142 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.436302 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436260 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436397 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:15.436383083 +0000 UTC m=+23.227049025 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436495 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436587 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:15.436566628 +0000 UTC m=+23.227232570 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436696 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436747 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436761 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436818 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:15.436800524 +0000 UTC m=+23.227466466 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.436328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.436873 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436959 4787 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436973 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.436984 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.437016 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:15.43700889 +0000 UTC m=+23.227674832 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.537779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:14 crc kubenswrapper[4787]: E0219 19:19:14.538101 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:19:15.538062898 +0000 UTC m=+23.328728840 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.562832 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 19:14:13 +0000 UTC, rotation deadline is 2026-12-27 09:44:56.269599394 +0000 UTC Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.562895 4787 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7454h25m41.706706421s for next certificate rotation Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.696225 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-44jcg"] Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.696600 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.698990 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.699053 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.699229 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.714933 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b33ea1a99fbaffbd8fc53911933fbff31a803267162a604316122725dba1d002\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:18:56Z\\\",\\\"message\\\":\\\"W0219 19:18:56.336560 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 19:18:56.336934 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771528736 cert, and key in /tmp/serving-cert-3641618204/serving-signer.crt, /tmp/serving-cert-3641618204/serving-signer.key\\\\nI0219 19:18:56.515031 1 observer_polling.go:159] Starting file observer\\\\nW0219 19:18:56.517952 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 19:18:56.518076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:18:56.519965 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3641618204/tls.crt::/tmp/serving-cert-3641618204/tls.key\\\\\\\"\\\\nF0219 19:18:56.761166 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.725458 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.738390 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.769900 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.791274 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.809241 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.820736 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.840303 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.840429 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvsww\" (UniqueName: \"kubernetes.io/projected/8c7b543f-66f3-4657-b0b6-2f47a4a40d40-kube-api-access-mvsww\") pod \"node-resolver-44jcg\" (UID: \"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\") " pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.840490 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8c7b543f-66f3-4657-b0b6-2f47a4a40d40-hosts-file\") pod \"node-resolver-44jcg\" (UID: \"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\") " pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.853696 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:29:12.568732808 
+0000 UTC Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.895280 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.896079 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.927228 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.941290 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvsww\" (UniqueName: \"kubernetes.io/projected/8c7b543f-66f3-4657-b0b6-2f47a4a40d40-kube-api-access-mvsww\") pod \"node-resolver-44jcg\" (UID: \"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\") " pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.941340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8c7b543f-66f3-4657-b0b6-2f47a4a40d40-hosts-file\") pod \"node-resolver-44jcg\" (UID: \"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\") " pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.941420 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8c7b543f-66f3-4657-b0b6-2f47a4a40d40-hosts-file\") pod \"node-resolver-44jcg\" (UID: \"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\") " pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:14 crc kubenswrapper[4787]: I0219 19:19:14.965589 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-mvsww\" (UniqueName: \"kubernetes.io/projected/8c7b543f-66f3-4657-b0b6-2f47a4a40d40-kube-api-access-mvsww\") pod \"node-resolver-44jcg\" (UID: \"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\") " pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.009843 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-44jcg" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.016082 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.018525 4787 scope.go:117] "RemoveContainer" containerID="7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9" Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.018709 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.030961 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.044561 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.048893 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 19:19:15 crc 
kubenswrapper[4787]: I0219 19:19:15.049716 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.050203 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.050868 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.051431 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.052301 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.055695 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.074141 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.081834 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.097297 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.099808 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.100490 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.101457 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.102601 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.103158 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.104205 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.104750 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.106087 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.107008 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.107747 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.109045 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.109586 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.110163 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.111107 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.112699 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.113840 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.114633 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.115937 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.116498 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.116761 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.117618 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.118471 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.119019 4787 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.119494 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.121387 4787 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.122420 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.123030 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.124598 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.125820 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.130140 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.131052 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.132400 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.133604 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.134322 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.135863 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.136544 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 19:19:15 crc 
kubenswrapper[4787]: I0219 19:19:15.138916 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.139447 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.140420 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.140952 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.142160 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.142814 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.143675 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.144209 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 19:19:15 crc 
kubenswrapper[4787]: I0219 19:19:15.145125 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.148455 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.148961 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.149871 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.149894 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890"} Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.149914 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9cgws"] Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.150381 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e85a5b8a8785073b1ad28e81db146058195a5657a964b48d1859f24d1cddab27"} Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.150394 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wlszq"] Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.150581 4787 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5xjgd"] Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.151552 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qxzkq"] Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.151830 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a1e034748411dfb891f63e63a8d68d3e202befdd2dca0a2d7732b1349d4bac34"} Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.151853 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1ba0eecdacb530ebb639ca68333392fde500da91bc2ca11f3f82a10f80879b31"} Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.151908 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.151954 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.151970 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.152042 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.158611 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.158812 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.159251 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.159324 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.159444 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.159872 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.159866 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.160781 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.161886 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.162091 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.164715 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166087 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166316 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166595 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166692 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166708 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166755 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166750 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.166837 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.174077 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.176592 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.182932 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.190613 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.209352 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.225861 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.239090 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244341 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-system-cni-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244513 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244612 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vld82\" (UniqueName: \"kubernetes.io/projected/0af035a6-d8a5-4686-b509-ec321548b323-kube-api-access-vld82\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244727 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00bdf088-5e51-4d51-9cb1-8e590898482c-mcd-auth-proxy-config\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244823 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-cnibin\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00bdf088-5e51-4d51-9cb1-8e590898482c-proxy-tls\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.244966 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-multus-certs\") pod \"multus-qxzkq\" (UID: 
\"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245030 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-netd\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245092 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0706129-aa73-40ed-899f-02882ed5a4cc-cni-binary-copy\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245172 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-cni-bin\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245263 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-cni-multus\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-cnibin\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " 
pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-ovn\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-ovn-kubernetes\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245618 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-config\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245709 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mht\" (UniqueName: \"kubernetes.io/projected/4989ff60-0c48-4f78-bcf6-2d394ee929fd-kube-api-access-r4mht\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245785 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42sc\" (UniqueName: \"kubernetes.io/projected/f0706129-aa73-40ed-899f-02882ed5a4cc-kube-api-access-j42sc\") pod \"multus-qxzkq\" (UID: 
\"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245852 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245913 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-os-release\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.245977 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-systemd\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246047 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-etc-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246116 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-node-log\") pod \"ovnkube-node-5xjgd\" 
(UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246245 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-script-lib\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-systemd-units\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246383 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-conf-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246455 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-os-release\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246524 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-daemon-config\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246551 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-netns\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246637 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-bin\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-k8s-cni-cncf-io\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-etc-kubernetes\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8fd\" (UniqueName: \"kubernetes.io/projected/00bdf088-5e51-4d51-9cb1-8e590898482c-kube-api-access-sp8fd\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-system-cni-dir\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-slash\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246817 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-var-lib-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246844 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-env-overrides\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246868 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-cni-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-kubelet\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246917 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-hostroot\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00bdf088-5e51-4d51-9cb1-8e590898482c-rootfs\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.246967 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-kubelet\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.247006 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-log-socket\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.247030 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovn-node-metrics-cert\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.247053 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-socket-dir-parent\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.247077 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0af035a6-d8a5-4686-b509-ec321548b323-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: 
I0219 19:19:15.247104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-netns\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.247128 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0af035a6-d8a5-4686-b509-ec321548b323-cni-binary-copy\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.254247 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.264748 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.271843 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.280722 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.295857 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.305158 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T
19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.313782 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.341898 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-netns\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347745 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovn-node-metrics-cert\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347761 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-socket-dir-parent\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0af035a6-d8a5-4686-b509-ec321548b323-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347797 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0af035a6-d8a5-4686-b509-ec321548b323-cni-binary-copy\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347824 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-system-cni-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-netns\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347837 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347937 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vld82\" (UniqueName: \"kubernetes.io/projected/0af035a6-d8a5-4686-b509-ec321548b323-kube-api-access-vld82\") pod 
\"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-socket-dir-parent\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347979 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-system-cni-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.347991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00bdf088-5e51-4d51-9cb1-8e590898482c-mcd-auth-proxy-config\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348051 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-cnibin\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348078 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00bdf088-5e51-4d51-9cb1-8e590898482c-proxy-tls\") pod \"machine-config-daemon-wlszq\" (UID: 
\"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-netd\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348122 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0706129-aa73-40ed-899f-02882ed5a4cc-cni-binary-copy\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348146 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-multus-certs\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-cnibin\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348204 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-cni-bin\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc 
kubenswrapper[4787]: I0219 19:19:15.348229 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-cni-multus\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348254 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348275 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-ovn\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348296 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-ovn-kubernetes\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348317 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-config\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348343 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r4mht\" (UniqueName: \"kubernetes.io/projected/4989ff60-0c48-4f78-bcf6-2d394ee929fd-kube-api-access-r4mht\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348360 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-cnibin\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j42sc\" (UniqueName: \"kubernetes.io/projected/f0706129-aa73-40ed-899f-02882ed5a4cc-kube-api-access-j42sc\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348416 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-os-release\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348434 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348446 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-systemd-units\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348473 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-systemd\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348498 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-etc-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348536 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-node-log\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348561 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348587 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-script-lib\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348604 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-netd\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348608 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-conf-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348659 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-conf-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348672 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-os-release\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-systemd-units\") pod \"ovnkube-node-5xjgd\" (UID: 
\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348700 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-daemon-config\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-bin\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348751 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-os-release\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-netns\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348787 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 
19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348798 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-k8s-cni-cncf-io\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-etc-kubernetes\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8fd\" (UniqueName: \"kubernetes.io/projected/00bdf088-5e51-4d51-9cb1-8e590898482c-kube-api-access-sp8fd\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348865 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-slash\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348886 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-var-lib-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348905 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-system-cni-dir\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-kubelet\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348952 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-env-overrides\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348971 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-cni-dir\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.348993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-kubelet\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349014 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-hostroot\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00bdf088-5e51-4d51-9cb1-8e590898482c-rootfs\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-log-socket\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349057 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00bdf088-5e51-4d51-9cb1-8e590898482c-mcd-auth-proxy-config\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349123 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-log-socket\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349126 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-multus-certs\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349169 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-slash\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-cnibin\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-var-lib-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349245 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0af035a6-d8a5-4686-b509-ec321548b323-cni-binary-copy\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349266 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-cni-bin\") pod \"multus-qxzkq\" (UID: 
\"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349272 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-cni-multus\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349294 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-systemd\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349303 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-kubelet\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-ovn\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-ovn-kubernetes\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 
19:19:15.349472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-node-log\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349495 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-etc-openvswitch\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-bin\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349629 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-netns\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349702 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-os-release\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349723 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-run-k8s-cni-cncf-io\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349742 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-etc-kubernetes\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349761 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-host-var-lib-kubelet\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349783 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.349805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00bdf088-5e51-4d51-9cb1-8e590898482c-rootfs\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.350737 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-cni-dir\") pod 
\"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.350834 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f0706129-aa73-40ed-899f-02882ed5a4cc-hostroot\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.351318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0af035a6-d8a5-4686-b509-ec321548b323-system-cni-dir\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.351782 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-env-overrides\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.351888 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-script-lib\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.352187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f0706129-aa73-40ed-899f-02882ed5a4cc-multus-daemon-config\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc 
kubenswrapper[4787]: I0219 19:19:15.352187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0af035a6-d8a5-4686-b509-ec321548b323-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.354473 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f0706129-aa73-40ed-899f-02882ed5a4cc-cni-binary-copy\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.354542 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00bdf088-5e51-4d51-9cb1-8e590898482c-proxy-tls\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.355336 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-config\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.361280 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovn-node-metrics-cert\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.381902 4787 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.383231 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8fd\" (UniqueName: \"kubernetes.io/projected/00bdf088-5e51-4d51-9cb1-8e590898482c-kube-api-access-sp8fd\") pod \"machine-config-daemon-wlszq\" (UID: \"00bdf088-5e51-4d51-9cb1-8e590898482c\") " pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.389048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j42sc\" (UniqueName: \"kubernetes.io/projected/f0706129-aa73-40ed-899f-02882ed5a4cc-kube-api-access-j42sc\") pod \"multus-qxzkq\" (UID: \"f0706129-aa73-40ed-899f-02882ed5a4cc\") " pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.390490 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vld82\" (UniqueName: \"kubernetes.io/projected/0af035a6-d8a5-4686-b509-ec321548b323-kube-api-access-vld82\") pod \"multus-additional-cni-plugins-9cgws\" (UID: \"0af035a6-d8a5-4686-b509-ec321548b323\") " pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.394577 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mht\" (UniqueName: \"kubernetes.io/projected/4989ff60-0c48-4f78-bcf6-2d394ee929fd-kube-api-access-r4mht\") pod \"ovnkube-node-5xjgd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.398655 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.425321 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.443260 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.450511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.450556 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.450581 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.450609 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450759 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450775 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450785 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:15 crc 
kubenswrapper[4787]: E0219 19:19:15.450790 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450828 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:17.450815774 +0000 UTC m=+25.241481716 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450842 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:17.450837295 +0000 UTC m=+25.241503237 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450857 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450945 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:17.450927367 +0000 UTC m=+25.241593309 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.450979 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.451042 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.451059 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.451135 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:17.451112472 +0000 UTC m=+25.241778474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.462934 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.476923 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.489163 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.496983 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qxzkq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.499522 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.505785 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9cgws" Feb 19 19:19:15 crc kubenswrapper[4787]: W0219 19:19:15.512366 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0706129_aa73_40ed_899f_02882ed5a4cc.slice/crio-df82b22e1d9a1a6094d5ee23c2625f722c4be776766a1c6bcd5c6c2a4881b3b7 WatchSource:0}: Error finding container df82b22e1d9a1a6094d5ee23c2625f722c4be776766a1c6bcd5c6c2a4881b3b7: Status 404 returned error can't find the container with id df82b22e1d9a1a6094d5ee23c2625f722c4be776766a1c6bcd5c6c2a4881b3b7 Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.512876 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.518860 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:15 crc kubenswrapper[4787]: W0219 19:19:15.519741 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0af035a6_d8a5_4686_b509_ec321548b323.slice/crio-86b25c07238591907182f4ee760fee00965b6418eeb41d554f1177a3d1868743 WatchSource:0}: Error finding container 86b25c07238591907182f4ee760fee00965b6418eeb41d554f1177a3d1868743: Status 404 returned error can't find the container with id 86b25c07238591907182f4ee760fee00965b6418eeb41d554f1177a3d1868743 Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.522224 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.533031 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: W0219 19:19:15.538844 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00bdf088_5e51_4d51_9cb1_8e590898482c.slice/crio-8882b003e79a825476f63a4e949373721a6cfc3b9e4818815e03cea52290e082 WatchSource:0}: Error finding container 8882b003e79a825476f63a4e949373721a6cfc3b9e4818815e03cea52290e082: Status 404 returned error can't find the container with id 8882b003e79a825476f63a4e949373721a6cfc3b9e4818815e03cea52290e082 Feb 19 19:19:15 crc kubenswrapper[4787]: W0219 19:19:15.540027 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4989ff60_0c48_4f78_bcf6_2d394ee929fd.slice/crio-3af053205aac22ed6c5d8d0bddbb344c801f6380db7a4f0e55f52596bfa23fb0 WatchSource:0}: Error finding container 3af053205aac22ed6c5d8d0bddbb344c801f6380db7a4f0e55f52596bfa23fb0: Status 404 returned error can't find the container with id 3af053205aac22ed6c5d8d0bddbb344c801f6380db7a4f0e55f52596bfa23fb0 Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.551188 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.551335 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:19:17.551314448 +0000 UTC m=+25.341980400 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.551864 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.565527 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.854438 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:19:09.450551896 +0000 UTC Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.891253 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.891284 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:15 crc kubenswrapper[4787]: I0219 19:19:15.891346 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.891380 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.891478 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:15 crc kubenswrapper[4787]: E0219 19:19:15.891546 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.025069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerStarted","Data":"66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.025148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerStarted","Data":"86b25c07238591907182f4ee760fee00965b6418eeb41d554f1177a3d1868743"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.026420 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerStarted","Data":"ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.026454 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerStarted","Data":"df82b22e1d9a1a6094d5ee23c2625f722c4be776766a1c6bcd5c6c2a4881b3b7"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.028742 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24" exitCode=0 Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.028862 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.028898 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"3af053205aac22ed6c5d8d0bddbb344c801f6380db7a4f0e55f52596bfa23fb0"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.032662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.032705 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.032715 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"8882b003e79a825476f63a4e949373721a6cfc3b9e4818815e03cea52290e082"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.035141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.035177 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.036762 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-44jcg" event={"ID":"8c7b543f-66f3-4657-b0b6-2f47a4a40d40","Type":"ContainerStarted","Data":"6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.036852 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-44jcg" event={"ID":"8c7b543f-66f3-4657-b0b6-2f47a4a40d40","Type":"ContainerStarted","Data":"e81c39953e33e533ac8d4bde0205eef0dd3c9ba560d7eabe05fb006eb1450669"} Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.044826 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: E0219 19:19:16.046924 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.059469 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.072918 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.091005 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.103340 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.117533 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.132739 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ent
rypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.146260 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.163337 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.175416 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.189421 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.213259 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111
d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.243416 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.260999 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.273236 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.285137 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.300557 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.314266 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.330603 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.349270 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.365899 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.376965 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.389639 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.404272 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.422247 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.443736 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:16 crc kubenswrapper[4787]: I0219 19:19:16.855264 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:31:18.109743939 +0000 UTC Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.454648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d"} Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.454967 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1"} Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.455043 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3"} Feb 19 19:19:17 
crc kubenswrapper[4787]: I0219 19:19:17.455114 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd"} Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.455188 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398"} Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.455265 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1"} Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.460893 4787 generic.go:334] "Generic (PLEG): container finished" podID="0af035a6-d8a5-4686-b509-ec321548b323" containerID="66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e" exitCode=0 Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.461090 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerDied","Data":"66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e"} Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.486200 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.502470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.502531 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.502575 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.502795 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.503041 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.503186 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:21.503168326 +0000 UTC m=+29.293834268 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.503185 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.503230 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.503245 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.503310 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:21.503290649 +0000 UTC m=+29.293956601 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.507034 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.507069 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.507091 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.507166 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:21.50713687 +0000 UTC m=+29.297802812 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.507548 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.507612 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:21.507590072 +0000 UTC m=+29.298256014 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.519971 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.541262 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.553773 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.570663 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.593142 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.603481 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.603751 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:19:21.603731151 +0000 UTC m=+29.394397093 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.613798 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.630614 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.649754 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.666014 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.684787 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.697007 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.708953 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.856835 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:01:44.519125963 +0000 UTC Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.891143 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.891174 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:17 crc kubenswrapper[4787]: I0219 19:19:17.891217 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.891314 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.891391 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:17 crc kubenswrapper[4787]: E0219 19:19:17.891518 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.468346 4787 generic.go:334] "Generic (PLEG): container finished" podID="0af035a6-d8a5-4686-b509-ec321548b323" containerID="ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3" exitCode=0 Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.468453 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerDied","Data":"ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3"} Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.471678 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c"} Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.488097 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da9
9cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.506821 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.522313 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.535638 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-z4xw6"] Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.536046 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.539573 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.539883 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.540106 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.540284 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.542817 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.562454 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.586258 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.604721 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.615681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc0cf1c-007a-4057-b79c-86396b74ca3e-host\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.615736 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wj4k\" (UniqueName: \"kubernetes.io/projected/dcc0cf1c-007a-4057-b79c-86396b74ca3e-kube-api-access-8wj4k\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.615788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcc0cf1c-007a-4057-b79c-86396b74ca3e-serviceca\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 
19:19:18.617822 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.630490 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.644176 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.657328 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.671099 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.684008 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.706761 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.716566 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wj4k\" (UniqueName: \"kubernetes.io/projected/dcc0cf1c-007a-4057-b79c-86396b74ca3e-kube-api-access-8wj4k\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.716652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcc0cf1c-007a-4057-b79c-86396b74ca3e-serviceca\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.716695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc0cf1c-007a-4057-b79c-86396b74ca3e-host\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.716789 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dcc0cf1c-007a-4057-b79c-86396b74ca3e-host\") pod \"node-ca-z4xw6\" (UID: 
\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.718413 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dcc0cf1c-007a-4057-b79c-86396b74ca3e-serviceca\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.728010 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.735014 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wj4k\" (UniqueName: \"kubernetes.io/projected/dcc0cf1c-007a-4057-b79c-86396b74ca3e-kube-api-access-8wj4k\") pod \"node-ca-z4xw6\" (UID: \"dcc0cf1c-007a-4057-b79c-86396b74ca3e\") " pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.743520 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.760862 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.772470 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.799181 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.821181 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.843950 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.850179 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-z4xw6" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.857476 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:54:59.308276554 +0000 UTC Feb 19 19:19:18 crc kubenswrapper[4787]: W0219 19:19:18.871951 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcc0cf1c_007a_4057_b79c_86396b74ca3e.slice/crio-8c93f45f5b5d433981fa5c68525fc04129cc70f6c31d0fe5073fa5ad2053006f WatchSource:0}: Error finding container 8c93f45f5b5d433981fa5c68525fc04129cc70f6c31d0fe5073fa5ad2053006f: Status 404 returned error can't find the container with id 8c93f45f5b5d433981fa5c68525fc04129cc70f6c31d0fe5073fa5ad2053006f Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.877734 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.905684 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.924363 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.941876 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.956296 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:18 crc kubenswrapper[4787]: I0219 19:19:18.969019 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.481121 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.483781 4787 generic.go:334] "Generic (PLEG): container finished" podID="0af035a6-d8a5-4686-b509-ec321548b323" containerID="1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608" exitCode=0 Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.483872 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerDied","Data":"1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.485776 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z4xw6" event={"ID":"dcc0cf1c-007a-4057-b79c-86396b74ca3e","Type":"ContainerStarted","Data":"0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.485892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z4xw6" event={"ID":"dcc0cf1c-007a-4057-b79c-86396b74ca3e","Type":"ContainerStarted","Data":"8c93f45f5b5d433981fa5c68525fc04129cc70f6c31d0fe5073fa5ad2053006f"} Feb 
19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.504852 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.515786 4787 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.519570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.519626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.519638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.519770 4787 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.523757 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.528552 4787 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.528801 4787 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.530178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.530219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.530233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.530251 4787 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.530260 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.534472 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.549401 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.549999 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.553384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.553413 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.553424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.553441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.553455 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.565608 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 
19:19:19.566283 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.570460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.570512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.570525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.570549 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.570564 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.585032 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.586793 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.589084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.589135 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.589148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.589168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.589181 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.603536 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.607248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.607294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.607310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.607328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.607341 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.608769 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.622862 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.623021 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.624902 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.624950 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.624970 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.624988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.624999 4787 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.635981 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.649815 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27
6703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.664428 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.676258 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.688707 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.703349 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.719621 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.732269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.732309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.732318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.732334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.732347 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.743275 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.759259 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.773008 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.783576 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.796783 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.824748 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19
T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.843404 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.843659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.843693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.843705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.843719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.843730 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.856783 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z 
is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.857803 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:51:30.790133097 +0000 UTC Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.873705 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 
19:19:19.890425 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.890780 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.890868 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.891108 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.891162 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.891200 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:19 crc kubenswrapper[4787]: E0219 19:19:19.891239 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.915848 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.936227 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.946799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.946866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.946878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.946901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.946916 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:19Z","lastTransitionTime":"2026-02-19T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.953029 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:19 crc kubenswrapper[4787]: I0219 19:19:19.965643 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T19:19:19Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.050316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.050387 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.050414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.050445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.050470 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.153795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.153862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.153874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.153891 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.153907 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.190268 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.194627 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.201320 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.207340 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.219086 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.231324 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.244048 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.255755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.255791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.255802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc 
kubenswrapper[4787]: I0219 19:19:20.255821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.255836 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.256428 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.281980 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.330187 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.347699 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.358152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.358195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.358208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.358227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.358239 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.363991 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.375192 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.387759 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.403288 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 
19:19:20.424101 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.444348 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.460926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.460980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.461001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.461025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.461044 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.463598 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.482995 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.490960 4787 generic.go:334] "Generic (PLEG): container finished" podID="0af035a6-d8a5-4686-b509-ec321548b323" containerID="7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46" exitCode=0 Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.491082 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerDied","Data":"7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.503387 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.518201 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.551186 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.563715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.563752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.563763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.563777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.563788 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.565577 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.578397 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.592416 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.609185 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.625172 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.640165 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.656872 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 
19:19:20.666601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.666683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.666697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.666722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.666734 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.672933 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.679336 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.680015 4787 scope.go:117] "RemoveContainer" containerID="7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9" Feb 19 19:19:20 crc kubenswrapper[4787]: E0219 19:19:20.680184 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.690379 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.702579 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.717450 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.733056 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.749856 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.764160 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.769433 4787 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.769481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.769492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.769508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.769519 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.778110 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.798160 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.824951 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.840971 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.855722 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.858366 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:03:25.291743794 +0000 UTC Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.870297 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.872273 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.872306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.872314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.872329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.872341 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.885561 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.897358 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.910406 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.923447 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.937310 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:20Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.974977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.975015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.975024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:20 crc 
kubenswrapper[4787]: I0219 19:19:20.975040 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:20 crc kubenswrapper[4787]: I0219 19:19:20.975054 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:20Z","lastTransitionTime":"2026-02-19T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.087591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.087666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.087705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.087730 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.087765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.190687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.190737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.190746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.190763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.190773 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.294367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.294429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.294439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.294460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.294474 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.398508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.398574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.398587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.398614 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.398651 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.502051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.502123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.502137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.502160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.502174 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.507532 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerStarted","Data":"b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.515275 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.515692 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.515763 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.515781 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.523590 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.540386 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.548159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.548241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.548315 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.548363 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.548502 4787 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.548531 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.548544 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.548607 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:29.548585214 +0000 UTC m=+37.339251156 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549003 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549032 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549047 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549067 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549098 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:29.549077557 +0000 UTC m=+37.339743499 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549121 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:29.549111138 +0000 UTC m=+37.339777070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549168 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.549200 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:29.54919153 +0000 UTC m=+37.339857472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.554895 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.555251 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.556726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb2
76703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.569257 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.587096 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.605696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.605756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.605773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.605794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.605808 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.606671 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.621638 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.637689 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.649139 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.649397 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:19:29.649359554 +0000 UTC m=+37.440025496 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.655889 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.674795 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.687884 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.701900 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.708753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.708820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.708835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.708857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.708868 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.725869 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld
82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.755923 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.778850 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.800053 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.811291 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.811351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.811364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.811384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.811397 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.819972 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.836086 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.854428 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.858670 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:54:08.997970852 +0000 UTC Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.870277 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":
\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.883385 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d
188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.891526 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.891582 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.891658 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.891767 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.891993 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:21 crc kubenswrapper[4787]: E0219 19:19:21.892069 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.898923 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.914857 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.928487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.928545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.928558 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:21 crc 
kubenswrapper[4787]: I0219 19:19:21.928576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.928590 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:21Z","lastTransitionTime":"2026-02-19T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.930003 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.946545 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.963004 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.983683 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:21 crc kubenswrapper[4787]: I0219 19:19:21.997595 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.012592 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.031393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.031440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.031454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.031473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.031486 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.031694 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld
82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.134084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.134162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.134175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.134197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.134211 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.237894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.237961 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.237972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.238000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.238040 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.340727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.340782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.340795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.340816 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.340831 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.443532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.443593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.443606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.443653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.443665 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.522459 4787 generic.go:334] "Generic (PLEG): container finished" podID="0af035a6-d8a5-4686-b509-ec321548b323" containerID="b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994" exitCode=0 Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.522543 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerDied","Data":"b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.541519 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.548836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.548886 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.548903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.548928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.548946 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.558462 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.573235 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.588785 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.602220 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.617422 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.632081 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.645443 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.651796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.651859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.651878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.651906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.651925 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.664779 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.677183 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.687985 4787 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.690144 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bd
cabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-qxzkq/status\": read tcp 38.102.83.150:37368->38.102.83.150:6443: use of closed network connection" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.723851 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.755746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.755805 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.755817 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.755838 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.755851 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.776153 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.797879 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.827088 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.858861 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:58:14.736187041 +0000 UTC Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.859998 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.860052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.860065 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.860087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.860104 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.906941 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.920463 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.940655 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.957581 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.962796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.962843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.962855 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:22 crc 
kubenswrapper[4787]: I0219 19:19:22.962876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.962889 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:22Z","lastTransitionTime":"2026-02-19T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.971827 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.987723 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:22 crc kubenswrapper[4787]: I0219 19:19:22.999402 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:22Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.014836 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.029996 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.045116 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.064883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.064920 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.064931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.064947 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.064958 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.073900 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.096205 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.139946 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.155320 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.168007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.168073 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.168086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.168108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.168124 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.171012 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.270482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.270509 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.270517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.270529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.270540 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.373669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.373722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.373734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.373754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.373770 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.477537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.477656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.477668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.477684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.477696 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.530615 4787 generic.go:334] "Generic (PLEG): container finished" podID="0af035a6-d8a5-4686-b509-ec321548b323" containerID="369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262" exitCode=0 Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.530702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerDied","Data":"369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.546068 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.561420 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.574758 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.580099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.580148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.580160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.580182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.580194 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.588604 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z 
is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.603612 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.630407 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.658282 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.674498 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.682657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.682701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.682715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.682736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.682751 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.690130 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.703724 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.716542 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.729901 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.745955 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.759484 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.775122 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:23Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.786038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.786083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.786095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.786115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.786125 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.860257 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:10:45.56462287 +0000 UTC Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.889057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.889116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.889126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.889146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.889158 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.891431 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.891516 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:23 crc kubenswrapper[4787]: E0219 19:19:23.891551 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.891435 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:23 crc kubenswrapper[4787]: E0219 19:19:23.891724 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:23 crc kubenswrapper[4787]: E0219 19:19:23.891840 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.993553 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.993649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.993666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.993688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:23 crc kubenswrapper[4787]: I0219 19:19:23.993703 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:23Z","lastTransitionTime":"2026-02-19T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.096954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.096990 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.097000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.097018 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.097031 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.200789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.200840 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.200850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.200867 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.200878 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.303346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.303380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.303392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.303407 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.303418 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.406324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.406386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.406398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.406419 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.406430 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.509058 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.509121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.509131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.509144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.509154 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.539435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" event={"ID":"0af035a6-d8a5-4686-b509-ec321548b323","Type":"ContainerStarted","Data":"6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.567115 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.584758 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.600845 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.612039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.612114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.612136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.612168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.612193 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.622318 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.640016 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.660124 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.681757 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1
c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:
19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.697654 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.715012 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.715562 4787 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.715606 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.715638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.715655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.715671 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.728109 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.753309 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.782444 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.797738 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.813913 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.817760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.817808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.817823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.817848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.817862 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.833376 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.860833 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:05:33.971827103 +0000 UTC Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.920568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.920627 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.920640 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.920657 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:24 crc kubenswrapper[4787]: I0219 19:19:24.920670 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:24Z","lastTransitionTime":"2026-02-19T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.023144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.023232 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.023247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.023264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.023296 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.126170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.126220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.126229 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.126243 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.126253 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.229593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.229653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.229662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.229676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.229684 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.332462 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.332521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.332531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.332574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.332592 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.435973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.436078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.436105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.436144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.436174 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.538443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.538492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.538502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.538521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.538531 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.642186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.642236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.642249 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.642273 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.642287 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.749299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.749358 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.749371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.749396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.749412 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.856939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.857009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.857028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.857057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.857076 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.861409 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:39:39.646693536 +0000 UTC Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.891498 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.891569 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.891673 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:25 crc kubenswrapper[4787]: E0219 19:19:25.891728 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:25 crc kubenswrapper[4787]: E0219 19:19:25.891867 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:25 crc kubenswrapper[4787]: E0219 19:19:25.892014 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.960052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.960098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.960108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.960124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:25 crc kubenswrapper[4787]: I0219 19:19:25.960134 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:25Z","lastTransitionTime":"2026-02-19T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.063008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.063053 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.063063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.063080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.063091 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.166275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.166328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.166340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.166360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.166374 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.269269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.269308 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.269319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.269333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.269342 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.372116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.372166 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.372179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.372197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.372208 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.475078 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.475127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.475139 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.475157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.475168 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.578424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.578491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.578502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.578517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.578528 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.681334 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.681407 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.681420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.681436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.681450 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.784142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.784200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.784212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.784233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.784246 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.862031 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:20:35.533922132 +0000 UTC Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.887353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.887389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.887398 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.887411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.887420 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.990418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.990472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.990484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.990505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:26 crc kubenswrapper[4787]: I0219 19:19:26.990520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:26Z","lastTransitionTime":"2026-02-19T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.092749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.092797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.092810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.092827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.092839 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.195094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.195136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.195149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.195166 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.195180 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.297460 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.297533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.297548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.297564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.297575 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.400107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.400186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.400209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.400233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.400252 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.503205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.503264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.503275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.503295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.503318 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.552276 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/0.log" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.555736 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358" exitCode=1 Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.555785 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.556594 4787 scope.go:117] "RemoveContainer" containerID="2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.573224 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.587892 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.603487 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.606210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.606269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.606286 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.606320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.606336 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.619516 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b3
86bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.633410 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.652974 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da9
9cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.669153 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.681185 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.693605 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.710382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.710465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.710479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.710504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.710520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.714733 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.734816 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.755329 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:27Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0219 19:19:26.957558 6070 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 19:19:26.957611 6070 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:26.957762 6070 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:26.958169 6070 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 19:19:26.958272 6070 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 19:19:26.958299 6070 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 19:19:26.958306 6070 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 19:19:26.958339 6070 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 19:19:26.958364 6070 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 19:19:26.958376 6070 factory.go:656] Stopping watch factory\\\\nI0219 19:19:26.958393 6070 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 19:19:26.958403 6070 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:26.958407 6070 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 19:19:26.958417 6070 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.788926 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.805298 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.813330 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.813416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.813431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.813461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.813478 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.818258 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:27Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.863023 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:32:57.243552675 +0000 UTC Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.891470 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.891556 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.891660 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:27 crc kubenswrapper[4787]: E0219 19:19:27.891707 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:27 crc kubenswrapper[4787]: E0219 19:19:27.891882 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:27 crc kubenswrapper[4787]: E0219 19:19:27.892006 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.915626 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.915667 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.915678 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.915698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:27 crc kubenswrapper[4787]: I0219 19:19:27.915710 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:27Z","lastTransitionTime":"2026-02-19T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.018324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.018392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.018405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.018426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.018441 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.121082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.121150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.121168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.121199 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.121222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.224113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.224166 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.224176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.224193 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.224204 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.326569 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.326632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.326644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.326663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.326675 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.378693 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2"] Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.379275 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.382494 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.383584 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.404062 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:27Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0219 19:19:26.957558 6070 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 19:19:26.957611 6070 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:26.957762 6070 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:26.958169 6070 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 19:19:26.958272 6070 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 19:19:26.958299 6070 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 19:19:26.958306 6070 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 19:19:26.958339 6070 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 19:19:26.958364 6070 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 19:19:26.958376 6070 factory.go:656] Stopping watch factory\\\\nI0219 19:19:26.958393 6070 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 19:19:26.958403 6070 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:26.958407 6070 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 19:19:26.958417 6070 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.416565 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.421869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b78cae4-54ac-423d-8d4e-27fadc07d335-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.421915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b78cae4-54ac-423d-8d4e-27fadc07d335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.421945 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-975jb\" (UniqueName: \"kubernetes.io/projected/7b78cae4-54ac-423d-8d4e-27fadc07d335-kube-api-access-975jb\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 
19:19:28.421975 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b78cae4-54ac-423d-8d4e-27fadc07d335-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.429453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.429526 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.429537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.429557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.429569 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.437889 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.451144 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.465823 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.479259 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.493407 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.507955 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.522726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.523015 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b78cae4-54ac-423d-8d4e-27fadc07d335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.523087 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-975jb\" (UniqueName: \"kubernetes.io/projected/7b78cae4-54ac-423d-8d4e-27fadc07d335-kube-api-access-975jb\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: 
I0219 19:19:28.523118 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b78cae4-54ac-423d-8d4e-27fadc07d335-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.523144 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b78cae4-54ac-423d-8d4e-27fadc07d335-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.523977 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b78cae4-54ac-423d-8d4e-27fadc07d335-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.524050 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b78cae4-54ac-423d-8d4e-27fadc07d335-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.530553 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b78cae4-54ac-423d-8d4e-27fadc07d335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tfkn2\" (UID: 
\"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.532253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.532294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.532323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.532345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.532362 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.538253 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.544711 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-975jb\" (UniqueName: \"kubernetes.io/projected/7b78cae4-54ac-423d-8d4e-27fadc07d335-kube-api-access-975jb\") pod 
\"ovnkube-control-plane-749d76644c-tfkn2\" (UID: \"7b78cae4-54ac-423d-8d4e-27fadc07d335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.554305 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"
},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.565797 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/0.log" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.570264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.570920 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.570937 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.584751 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.599937 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.619904 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a
7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.635095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.635157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.635170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.635187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.635200 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.637697 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.659895 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.678693 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:27Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0219 19:19:26.957558 6070 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 19:19:26.957611 6070 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:26.957762 6070 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:26.958169 6070 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 19:19:26.958272 6070 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 19:19:26.958299 6070 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 19:19:26.958306 6070 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 19:19:26.958339 6070 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 19:19:26.958364 6070 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 19:19:26.958376 6070 factory.go:656] Stopping watch factory\\\\nI0219 19:19:26.958393 6070 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 19:19:26.958403 6070 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:26.958407 6070 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 19:19:26.958417 6070 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.691672 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.692833 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.706889 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: W0219 19:19:28.707376 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b78cae4_54ac_423d_8d4e_27fadc07d335.slice/crio-2d17e5687af0caa2e467e40fc51ce5fb6f93e892b5b81e4a70999ec3ab2a09cc WatchSource:0}: Error finding container 
2d17e5687af0caa2e467e40fc51ce5fb6f93e892b5b81e4a70999ec3ab2a09cc: Status 404 returned error can't find the container with id 2d17e5687af0caa2e467e40fc51ce5fb6f93e892b5b81e4a70999ec3ab2a09cc Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.723249 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.737869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.737928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.737942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.737960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.738320 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.741464 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.764059 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.779010 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.793080 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.806915 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.819134 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.836362 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.841524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.841566 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.841577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.841593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.841607 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.852510 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.863220 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:07:07.512027569 +0000 UTC Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.864733 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.879362 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.896470 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a
7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.944943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.944990 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.945000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.945019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:28 crc kubenswrapper[4787]: I0219 19:19:28.945031 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:28Z","lastTransitionTime":"2026-02-19T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.048391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.048452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.048472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.048498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.048520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.151332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.151384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.151395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.151413 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.151425 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.255731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.255789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.255799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.255822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.255834 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.358092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.358136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.358147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.358164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.358176 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.461141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.461208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.461221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.461239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.461253 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.564648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.564699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.564708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.564728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.564741 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.580428 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/1.log" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.581238 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/0.log" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.584471 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682" exitCode=1 Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.584556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.584601 4787 scope.go:117] "RemoveContainer" containerID="2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.585456 4787 scope.go:117] "RemoveContainer" containerID="3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.585679 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.587531 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" event={"ID":"7b78cae4-54ac-423d-8d4e-27fadc07d335","Type":"ContainerStarted","Data":"9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.587604 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" event={"ID":"7b78cae4-54ac-423d-8d4e-27fadc07d335","Type":"ContainerStarted","Data":"2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.587641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" event={"ID":"7b78cae4-54ac-423d-8d4e-27fadc07d335","Type":"ContainerStarted","Data":"2d17e5687af0caa2e467e40fc51ce5fb6f93e892b5b81e4a70999ec3ab2a09cc"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.602967 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.618948 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.633106 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.635928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.636039 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.636144 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.636206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.636677 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.636720 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.636736 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.636813 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:45.636783701 +0000 UTC m=+53.427449643 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637034 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637046 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637088 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637121 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637142 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637187 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:45.637162901 +0000 UTC m=+53.427828843 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637261 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:45.637246513 +0000 UTC m=+53.427912665 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.637311 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:19:45.637279554 +0000 UTC m=+53.427945496 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.648469 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.662469 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.667579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.667652 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.667663 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.667685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.667696 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.679582 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d
5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.698024 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da9
9cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.713866 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.726091 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.737075 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.737359 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:19:45.737315735 +0000 UTC m=+53.527981727 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.740143 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.762173 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\
\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db08
46b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.770039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.770075 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.770084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.770098 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.770109 4787 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.788442 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:27Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0219 19:19:26.957558 6070 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 19:19:26.957611 6070 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:26.957762 6070 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:26.958169 6070 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 19:19:26.958272 6070 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 19:19:26.958299 6070 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 19:19:26.958306 6070 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 19:19:26.958339 6070 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 19:19:26.958364 6070 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 19:19:26.958376 6070 factory.go:656] Stopping watch factory\\\\nI0219 19:19:26.958393 6070 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 19:19:26.958403 6070 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:26.958407 6070 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 19:19:26.958417 6070 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d
\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.803209 4787 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.818104 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.820677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.820720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.820728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.820744 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.820755 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.833771 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.834735 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cv5f6"] Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.835335 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.835406 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.836692 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.840990 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.841031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.841041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.841057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.841068 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.847303 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.853286 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.857635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.857693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.857709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.857734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.857753 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.864099 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:20:10.195512955 +0000 UTC Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.867025 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ea876f144394b5a82796c464c7b97f6fcd2ef81313824f36643d139244be358\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:27Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0219 19:19:26.957558 6070 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 19:19:26.957611 6070 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:26.957762 6070 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:26.958169 6070 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 19:19:26.958272 6070 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 19:19:26.958299 6070 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 19:19:26.958306 6070 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 19:19:26.958339 6070 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 19:19:26.958364 6070 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 19:19:26.958376 6070 factory.go:656] Stopping watch factory\\\\nI0219 19:19:26.958393 6070 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 19:19:26.958403 6070 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:26.958407 6070 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 19:19:26.958417 6070 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d
\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.870338 4787 kubelet_node_status.go:585] "Error updating node 
status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.874393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.874450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.874488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.874508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.874526 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.880511 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.886214 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.892723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.892788 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.892719 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.893093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.893133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.893146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.893083 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.893170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.893229 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.893253 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.893355 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.896074 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc 
kubenswrapper[4787]: E0219 19:19:29.905981 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: E0219 19:19:29.906095 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.907974 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.908024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.908037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.908052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.908062 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:29Z","lastTransitionTime":"2026-02-19T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.916275 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.929496 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.939640 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dtl\" (UniqueName: \"kubernetes.io/projected/56f25fce-8c35-4786-94f3-93854459f32a-kube-api-access-p2dtl\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.939685 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.944039 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.955274 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.968732 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:29 crc kubenswrapper[4787]: I0219 19:19:29.984010 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:29.999925 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:29Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.011134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.011188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.011197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.011214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.011225 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.013093 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.025871 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.040684 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.040887 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dtl\" (UniqueName: \"kubernetes.io/projected/56f25fce-8c35-4786-94f3-93854459f32a-kube-api-access-p2dtl\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.040931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:30 crc kubenswrapper[4787]: E0219 19:19:30.041038 4787 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:30 crc kubenswrapper[4787]: E0219 19:19:30.041098 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:19:30.541080255 +0000 UTC m=+38.331746197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.055584 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65
eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.060683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dtl\" (UniqueName: \"kubernetes.io/projected/56f25fce-8c35-4786-94f3-93854459f32a-kube-api-access-p2dtl\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.074473 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.090300 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.106413 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.114625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.114686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.114697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.114719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.114731 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.217446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.217780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.217909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.217989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.218071 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.321877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.321960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.321984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.322010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.322029 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.424588 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.425084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.425263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.425422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.425656 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.528541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.528666 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.528697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.528732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.528758 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.547980 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:30 crc kubenswrapper[4787]: E0219 19:19:30.548231 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:30 crc kubenswrapper[4787]: E0219 19:19:30.548328 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:19:31.548300325 +0000 UTC m=+39.338966307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.594372 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/1.log" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.601147 4787 scope.go:117] "RemoveContainer" containerID="3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682" Feb 19 19:19:30 crc kubenswrapper[4787]: E0219 19:19:30.601466 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.619022 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.632186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc 
kubenswrapper[4787]: I0219 19:19:30.632520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.632635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.632730 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.632833 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.639731 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c528
52910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.670269 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.698037 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.717232 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.732237 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc 
kubenswrapper[4787]: I0219 19:19:30.738954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.739007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.739021 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.739044 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.739059 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.760673 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.780990 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.794567 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.810098 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.826060 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.842061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.842113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.842127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.842148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.842161 4787 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.844423 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.858459 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.864989 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:30:30.104948824 +0000 UTC Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.873654 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.886017 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.891652 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:30 crc kubenswrapper[4787]: E0219 19:19:30.891793 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.900593 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.915818 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:30Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.944134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.944177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.944187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:30 crc 
kubenswrapper[4787]: I0219 19:19:30.944206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:30 crc kubenswrapper[4787]: I0219 19:19:30.944218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:30Z","lastTransitionTime":"2026-02-19T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.047170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.047244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.047256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.047278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.047291 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.150241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.150689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.150700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.150715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.150725 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.253439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.253481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.253494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.253513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.253526 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.357099 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.357150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.357163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.357179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.357198 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.460397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.460475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.460487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.460510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.460524 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.557697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:31 crc kubenswrapper[4787]: E0219 19:19:31.557985 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:31 crc kubenswrapper[4787]: E0219 19:19:31.558149 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:19:33.558115884 +0000 UTC m=+41.348782026 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.563137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.563211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.563224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.563242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.563258 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.666096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.666172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.666185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.666204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.666218 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.768850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.768922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.768936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.768953 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.768965 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.865312 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:34:19.476157925 +0000 UTC Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.871580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.871701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.871717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.871732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.871741 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.890981 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.891059 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.891012 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:31 crc kubenswrapper[4787]: E0219 19:19:31.891126 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:31 crc kubenswrapper[4787]: E0219 19:19:31.891302 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:31 crc kubenswrapper[4787]: E0219 19:19:31.891347 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.976120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.976154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.976162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.976177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:31 crc kubenswrapper[4787]: I0219 19:19:31.976186 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:31Z","lastTransitionTime":"2026-02-19T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.078642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.078711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.078728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.078750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.078765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.181220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.181272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.181284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.181302 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.181315 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.283826 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.283887 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.283908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.283933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.283947 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.390624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.390669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.390684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.390708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.390721 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.495100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.495171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.495188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.495209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.495223 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.598319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.598371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.598382 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.598399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.598412 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.701425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.701479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.701489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.701511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.701526 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.804445 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.804517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.804536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.804568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.804590 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.866035 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:11:07.994332157 +0000 UTC Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.891942 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:32 crc kubenswrapper[4787]: E0219 19:19:32.892159 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.908480 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.908561 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.908581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.908635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.908658 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:32Z","lastTransitionTime":"2026-02-19T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.911356 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:32Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.937834 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:32Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.958364 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:32Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.981943 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da9
9cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:32Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:32 crc kubenswrapper[4787]: I0219 19:19:32.997191 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:32Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.012240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.012329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.012412 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.012481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.012506 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.014574 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.031542 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc 
kubenswrapper[4787]: I0219 19:19:33.060658 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.085750 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.104487 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.115933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.115978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.115990 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.116003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.116067 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.127067 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.142230 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.158419 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.172045 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.186277 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.198527 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.214378 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:33Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.222774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.222827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.222836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.222856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.222870 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.325926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.326023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.326054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.326092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.326121 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.429161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.429221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.429234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.429255 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.429269 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.533041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.533116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.533138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.533178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.533197 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.582035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:33 crc kubenswrapper[4787]: E0219 19:19:33.582233 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:33 crc kubenswrapper[4787]: E0219 19:19:33.582303 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:19:37.582285043 +0000 UTC m=+45.372950985 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.635814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.635863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.635872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.635886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.635897 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.738709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.738758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.738769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.738788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.738797 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.841503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.841557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.841574 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.841597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.841640 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.866229 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:15:57.173741599 +0000 UTC Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.891214 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.891255 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:33 crc kubenswrapper[4787]: E0219 19:19:33.892054 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.891305 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:33 crc kubenswrapper[4787]: E0219 19:19:33.892153 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:33 crc kubenswrapper[4787]: E0219 19:19:33.892263 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.944896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.944950 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.944962 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.944981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:33 crc kubenswrapper[4787]: I0219 19:19:33.945001 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:33Z","lastTransitionTime":"2026-02-19T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.047481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.047548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.047573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.047602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.047675 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.150393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.150444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.150459 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.150480 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.150496 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.253644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.253694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.253709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.253726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.253740 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.358192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.358284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.358301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.358329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.358352 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.462165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.462231 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.462246 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.462266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.462279 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.565885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.565971 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.566005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.566042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.566068 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.668980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.669057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.669076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.669106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.669129 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.771494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.771559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.771571 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.771595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.771634 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.866999 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:29:56.224693526 +0000 UTC Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.875504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.875564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.875584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.875632 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.875646 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.892024 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:34 crc kubenswrapper[4787]: E0219 19:19:34.892475 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.892904 4787 scope.go:117] "RemoveContainer" containerID="7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.979107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.979183 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.979201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.979226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:34 crc kubenswrapper[4787]: I0219 19:19:34.979243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:34Z","lastTransitionTime":"2026-02-19T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.082492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.082543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.082557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.082579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.082592 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.185429 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.185484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.185498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.185521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.185540 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.289275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.289342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.289365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.289394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.289413 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.392676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.392734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.392746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.392768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.392781 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.496399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.496466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.496484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.496510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.496527 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.599303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.599379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.599428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.599457 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.599472 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.620208 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.622277 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.622959 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.643602 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.671809 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.693652 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.702709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.702767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.702780 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.702802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.702816 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.709807 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b3
86bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.726572 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.742960 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.764287 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.786058 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.803097 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a
7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.805602 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.805662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.805672 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.805693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.805706 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.819962 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.842951 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.855646 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.867989 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:54:46.825197676 +0000 UTC Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.869514 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc 
kubenswrapper[4787]: I0219 19:19:35.891303 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.891390 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.891454 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.891388 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c
6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: E0219 19:19:35.891598 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:35 crc kubenswrapper[4787]: E0219 19:19:35.891691 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:35 crc kubenswrapper[4787]: E0219 19:19:35.891773 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.906748 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.908927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.908971 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.908987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.909011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.909024 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:35Z","lastTransitionTime":"2026-02-19T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.922040 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:35 crc kubenswrapper[4787]: I0219 19:19:35.936128 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"ipta
bles-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.012649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.012691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.012707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.012727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.012739 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.115812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.115868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.115878 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.115898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.115910 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.219727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.219779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.219795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.219815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.219830 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.322979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.323049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.323069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.323100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.323126 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.428068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.428827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.428907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.428957 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.428990 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.533319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.533384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.533403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.533432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.533453 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.637013 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.637056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.637070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.637090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.637103 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.740797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.740865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.740888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.740920 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.740945 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.844061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.844112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.844124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.844141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.844156 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.868388 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:06:55.729389873 +0000 UTC Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.890954 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:36 crc kubenswrapper[4787]: E0219 19:19:36.891117 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.946723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.946796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.946807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.946824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:36 crc kubenswrapper[4787]: I0219 19:19:36.946902 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:36Z","lastTransitionTime":"2026-02-19T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.050014 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.050085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.050103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.050127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.050145 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.154225 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.154279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.154291 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.154309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.154328 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.257250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.257341 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.257369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.257408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.257435 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.360045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.360115 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.360127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.360175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.360190 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.462951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.462991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.463020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.463036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.463046 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.565968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.566069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.566085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.566108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.566124 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.632781 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:37 crc kubenswrapper[4787]: E0219 19:19:37.633114 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:37 crc kubenswrapper[4787]: E0219 19:19:37.633250 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:19:45.633207727 +0000 UTC m=+53.423873719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.669019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.669104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.669117 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.669176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.669192 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.772396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.772443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.772475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.772490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.772502 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.868922 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:41:39.809359426 +0000 UTC Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.875034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.875083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.875095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.875113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.875125 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.891717 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.891736 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.891927 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:37 crc kubenswrapper[4787]: E0219 19:19:37.891999 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:37 crc kubenswrapper[4787]: E0219 19:19:37.892199 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:37 crc kubenswrapper[4787]: E0219 19:19:37.892429 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.977177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.977216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.977244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.977259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:37 crc kubenswrapper[4787]: I0219 19:19:37.977268 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:37Z","lastTransitionTime":"2026-02-19T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.080455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.080529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.080545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.080563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.080575 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.184206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.184265 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.184283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.184306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.184320 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.288228 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.288285 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.288297 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.288315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.288328 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.391643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.391695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.391705 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.391721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.391731 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.494167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.494218 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.494230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.494249 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.494262 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.598066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.598133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.598143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.598165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.598179 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.702904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.702958 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.702968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.702986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.703006 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.807207 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.807268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.807285 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.807315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.807335 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.869547 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:33:02.833833224 +0000 UTC Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.891309 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:38 crc kubenswrapper[4787]: E0219 19:19:38.891511 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.911130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.911172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.911196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.911211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:38 crc kubenswrapper[4787]: I0219 19:19:38.911222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:38Z","lastTransitionTime":"2026-02-19T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.014443 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.014536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.014572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.014649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.014683 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.118461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.118522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.118536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.118556 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.118569 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.220872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.220922 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.220935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.220954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.220968 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.323497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.323543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.323554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.323575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.323587 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.458147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.458208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.458223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.458248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.458266 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.560726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.560773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.560785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.560804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.560819 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.663822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.663903 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.663919 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.663938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.663951 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.766758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.766809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.766822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.766846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.766864 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.869690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.869748 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.869761 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.869786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.869802 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.869837 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:06:52.706005948 +0000 UTC Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.891060 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.891173 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:39 crc kubenswrapper[4787]: E0219 19:19:39.891242 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.891259 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:39 crc kubenswrapper[4787]: E0219 19:19:39.891407 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:39 crc kubenswrapper[4787]: E0219 19:19:39.891669 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.972949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.972991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.973002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.973016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:39 crc kubenswrapper[4787]: I0219 19:19:39.973026 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:39Z","lastTransitionTime":"2026-02-19T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.076841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.076888 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.076900 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.076918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.076929 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.125543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.125659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.125681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.125715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.125740 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.142552 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:40Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.147735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.147854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.147880 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.147907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.147921 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.168950 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…status patch payload identical to the previous entry…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:40Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.173636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.173669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.173679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.173697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.173709 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.193349 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…status patch payload identical to the previous entry…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:40Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.198353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.198397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.198411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.198433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.198449 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.224218 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:40Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.231018 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.231089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.231102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.231123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.231139 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.246833 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:40Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.247073 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.249293 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.249354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.249369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.249391 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.249427 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.353590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.353662 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.353673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.353695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.353709 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.459025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.459148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.459160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.459188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.459201 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.562930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.562974 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.562985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.563001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.563011 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.666284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.666383 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.666412 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.666441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.666465 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.769834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.769909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.769928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.769954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.769971 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.870841 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:20:25.391318097 +0000 UTC Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.873823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.873912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.873934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.873964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.873988 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.891592 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:40 crc kubenswrapper[4787]: E0219 19:19:40.891835 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.978038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.978113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.978126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.978146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:40 crc kubenswrapper[4787]: I0219 19:19:40.978157 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:40Z","lastTransitionTime":"2026-02-19T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.080556 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.080634 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.080656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.080683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.080697 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:41Z","lastTransitionTime":"2026-02-19T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.514709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.514800 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.514822 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.514859 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.514881 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:41Z","lastTransitionTime":"2026-02-19T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.617023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.617093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.617106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.617121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.617132 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:41Z","lastTransitionTime":"2026-02-19T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.719125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.719166 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.719175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.719192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.719201 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:41Z","lastTransitionTime":"2026-02-19T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.822082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.822129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.822140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.822165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.822180 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:41Z","lastTransitionTime":"2026-02-19T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.871732 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:45:28.781036004 +0000 UTC Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.891103 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.891220 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.891220 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:41 crc kubenswrapper[4787]: E0219 19:19:41.891362 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:41 crc kubenswrapper[4787]: E0219 19:19:41.892218 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:41 crc kubenswrapper[4787]: E0219 19:19:41.892167 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.925211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.925253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.925263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.925279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:41 crc kubenswrapper[4787]: I0219 19:19:41.925291 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:41Z","lastTransitionTime":"2026-02-19T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.028696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.028794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.028806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.028823 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.028837 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.131756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.131802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.131814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.131832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.131843 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.234695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.234732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.234742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.234755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.234764 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.337581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.337689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.337701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.337719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.337730 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.440659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.440709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.440722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.440752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.440765 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.543641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.543701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.543714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.543735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.543749 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.648380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.648437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.648449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.648474 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.648489 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.751019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.751063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.751072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.751086 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.751095 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.853767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.853827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.853858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.853876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.853886 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.872670 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:05:39.83314859 +0000 UTC Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.891110 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:42 crc kubenswrapper[4787]: E0219 19:19:42.891326 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.892066 4787 scope.go:117] "RemoveContainer" containerID="3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.905376 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
9T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd
83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.918021 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.931786 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.948228 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.957524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.957578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.957593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.957629 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.957644 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:42Z","lastTransitionTime":"2026-02-19T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.964913 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.979151 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:42 crc kubenswrapper[4787]: I0219 19:19:42.995077 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:42Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.010398 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.023524 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19
T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.035985 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.050061 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.059681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.059743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.059755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.059774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.059787 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.065766 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.082482 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.105385 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.118216 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.131297 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc 
kubenswrapper[4787]: I0219 19:19:43.153730 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.163132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.163190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.163203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.163222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.163234 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.265592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.265654 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.265665 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.265681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.265692 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.373508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.373545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.373583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.373599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.373624 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.476635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.476674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.476682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.476698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.476707 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.578797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.578847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.578858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.578874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.578886 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.653748 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/1.log" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.657690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.658209 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.680641 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be
700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.681649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.681685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.681704 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.681723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.681735 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.700396 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.715244 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.726798 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc 
kubenswrapper[4787]: I0219 19:19:43.740141 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.755982 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.770170 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.783388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.783423 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.783432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.783446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.783455 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.785806 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.799982 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.814950 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.829909 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.840953 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.857475 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.871698 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 
19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.873721 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:29:42.051309757 +0000 UTC Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.882815 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.885537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.885576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.885586 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.885619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.885631 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.890774 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:43 crc kubenswrapper[4787]: E0219 19:19:43.890869 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.890777 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:43 crc kubenswrapper[4787]: E0219 19:19:43.890951 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.890773 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:43 crc kubenswrapper[4787]: E0219 19:19:43.891023 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.896657 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.911718 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.987547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.987587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.987595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.987625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:43 crc kubenswrapper[4787]: I0219 19:19:43.987638 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:43Z","lastTransitionTime":"2026-02-19T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.090214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.090262 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.090275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.090292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.090303 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.192924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.192972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.192984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.193001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.193013 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.295283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.295385 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.295397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.295416 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.295435 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.397555 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.397590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.397599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.397671 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.397686 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.500571 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.500636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.500646 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.500661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.500672 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.604670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.604738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.604753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.604773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.604789 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.664173 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/2.log" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.665224 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/1.log" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.668716 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492" exitCode=1 Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.668785 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.668842 4787 scope.go:117] "RemoveContainer" containerID="3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.669456 4787 scope.go:117] "RemoveContainer" containerID="3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492" Feb 19 19:19:44 crc kubenswrapper[4787]: E0219 19:19:44.669659 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.686283 4787 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.700443 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.708083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.708121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.708134 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.708155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.708168 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.716871 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531
239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.731803 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.746969 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.760810 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.773241 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.789404 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.804967 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.810295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.810487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.810572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.810711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.810813 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.820218 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z 
is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.836534 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.853996 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.872095 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 
19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.874228 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:51:04.763108788 +0000 UTC Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.890384 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19
:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 
2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.891310 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:44 crc kubenswrapper[4787]: E0219 19:19:44.891504 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.904851 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc 
kubenswrapper[4787]: I0219 19:19:44.913260 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.913803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.914145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.914381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.914551 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:44Z","lastTransitionTime":"2026-02-19T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.933917 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:44 crc kubenswrapper[4787]: I0219 19:19:44.959339 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d4442d44e6fa36610089b04a15a383e31ae749f947712df48fe55174b75d682\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"message\\\":\\\":28.735829 6257 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 19:19:28.736192 6257 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.736514 6257 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.737264 6257 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0219 19:19:28.737462 6257 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 19:19:28.738313 6257 factory.go:656] Stopping watch factory\\\\nI0219 19:19:28.752708 6257 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0219 19:19:28.752877 6257 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0219 19:19:28.753056 6257 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:19:28.753095 6257 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:19:28.753276 6257 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b
96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:44Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.017417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.017472 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.017481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.017521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.017531 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.121166 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.121252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.121273 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.121306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.121329 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.225001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.225061 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.225080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.225109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.225131 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.328890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.328942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.328954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.328975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.328989 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.431544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.431683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.431714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.431743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.431763 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.534250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.534725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.534738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.534758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.534771 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.637684 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.637742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.637756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.637776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.637791 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.658845 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.658917 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.658950 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.658971 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.658993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659112 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659167 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:20:17.65915253 +0000 UTC m=+85.449818472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659200 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659224 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659279 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659353 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:20:01.659328515 +0000 UTC m=+69.449994537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659249 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659390 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:20:17.659369817 +0000 UTC m=+85.450035759 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659401 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659406 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659454 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:20:17.659443999 +0000 UTC m=+85.450109941 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659468 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659498 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.659684 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:20:17.659602063 +0000 UTC m=+85.450268045 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.676882 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/2.log" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.681894 4787 scope.go:117] "RemoveContainer" containerID="3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492" Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.682059 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.701158 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.715852 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.742596 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.746234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.746309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.746324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.746380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.746396 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.759922 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.760211 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:20:17.760167901 +0000 UTC m=+85.550833843 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.761627 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.775427 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.793339 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.811796 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 
19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.823961 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.839994 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb41
3bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.849679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.849731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.849742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.849764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.849774 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.856865 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.874912 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:24:57.303089801 +0000 UTC Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.881891 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be
700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.891907 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.891945 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.892009 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.892044 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.892107 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:45 crc kubenswrapper[4787]: E0219 19:19:45.892185 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.905493 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.918355 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.931650 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc 
kubenswrapper[4787]: I0219 19:19:45.946660 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.952968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.953025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.953039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.953059 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.953073 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:45Z","lastTransitionTime":"2026-02-19T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.965217 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:45 crc kubenswrapper[4787]: I0219 19:19:45.977518 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.055069 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.055161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.055172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.055186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.055197 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.157424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.157461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.157471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.157483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.157493 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.261318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.261950 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.261988 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.262009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.262022 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.364270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.364306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.364323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.364343 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.364355 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.466439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.466498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.466516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.466540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.466557 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.569205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.569241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.569253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.569269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.569280 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.672715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.672776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.672803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.672841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.672865 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.775663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.775697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.775706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.775721 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.775735 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.875664 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:21:24.142509765 +0000 UTC Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.878892 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.878942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.878958 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.878983 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.878999 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.891828 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:46 crc kubenswrapper[4787]: E0219 19:19:46.892001 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.986512 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.986557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.986567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.986582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:46 crc kubenswrapper[4787]: I0219 19:19:46.986593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:46Z","lastTransitionTime":"2026-02-19T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.090238 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.090309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.090328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.090356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.090373 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.193390 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.193439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.193451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.193471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.193487 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.296590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.296675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.296695 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.296720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.296736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.399674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.399742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.399759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.399794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.399821 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.503586 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.503677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.503693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.503717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.503733 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.606835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.606894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.606914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.606937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.606954 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.711152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.711230 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.711257 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.711284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.711305 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.813830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.813883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.813896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.813918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.813934 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.876573 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:11:01.604960533 +0000 UTC Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.891075 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.891075 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.891217 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:47 crc kubenswrapper[4787]: E0219 19:19:47.891365 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:47 crc kubenswrapper[4787]: E0219 19:19:47.891472 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:47 crc kubenswrapper[4787]: E0219 19:19:47.891548 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.916690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.916742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.916756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.916775 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:47 crc kubenswrapper[4787]: I0219 19:19:47.916787 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:47Z","lastTransitionTime":"2026-02-19T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.020384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.020426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.020434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.020450 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.020462 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.123979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.124036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.124045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.124068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.124081 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.227700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.227782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.227807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.227833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.227852 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.332065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.332128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.332143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.332170 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.332189 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.435717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.435807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.435851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.435885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.435905 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.539862 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.539935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.539956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.539987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.540008 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.644294 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.644356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.644371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.644392 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.644406 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.747833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.747901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.747924 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.747953 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.747971 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.843381 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.850726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.850778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.850788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.850809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.850820 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.857859 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.869194 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd87
54915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.877645 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:15:53.479932184 +0000 UTC Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.891414 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.892111 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:48 crc kubenswrapper[4787]: E0219 19:19:48.892389 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.905336 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.918009 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc 
kubenswrapper[4787]: I0219 19:19:48.936417 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.953525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.953576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.953586 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.953617 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.953630 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:48Z","lastTransitionTime":"2026-02-19T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.954908 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.972291 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:48 crc kubenswrapper[4787]: I0219 19:19:48.984514 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.000166 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.014901 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.026953 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.038236 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.055820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.055893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.055904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.055919 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.055929 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.060875 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.078736 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 
19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.093073 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.108789 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb41
3bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.126519 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee6
5f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe
4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.158436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.158505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.158518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.158534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.158546 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.261125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.261444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.261526 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.261634 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.261716 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.364519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.364556 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.364566 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.364581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.364594 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.467325 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.467365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.467374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.467388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.467398 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.569357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.569404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.569418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.569435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.569446 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.671659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.671699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.671709 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.671786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.671798 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.774655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.774729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.774747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.774778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.774797 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.877591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.877806 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.877828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.877843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.877801 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 20:25:57.524876275 +0000 UTC Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.877853 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.891438 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.891490 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.891452 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:49 crc kubenswrapper[4787]: E0219 19:19:49.891675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:49 crc kubenswrapper[4787]: E0219 19:19:49.891772 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:49 crc kubenswrapper[4787]: E0219 19:19:49.891865 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.981449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.981787 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.981866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.981949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:49 crc kubenswrapper[4787]: I0219 19:19:49.982024 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:49Z","lastTransitionTime":"2026-02-19T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.084441 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.084490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.084499 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.084513 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.084522 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.188031 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.188096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.188114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.188143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.188162 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.291311 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.291406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.291425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.291456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.291480 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.394897 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.394985 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.395002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.395030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.395048 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.499052 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.499123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.499142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.499173 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.499195 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.515257 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.515318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.515333 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.515353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.515365 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.535049 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.541085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.541153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.541173 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.541201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.541223 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.558709 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.564463 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.564686 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.564814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.564925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.565011 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.582896 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.588254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.588315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.588326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.588349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.588362 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.602523 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.608253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.608409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.608435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.608467 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.608485 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.627290 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.627468 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.629261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.629290 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.629300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.629315 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.629328 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.732301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.732374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.732386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.732402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.732415 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.834437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.834518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.834536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.834579 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.834597 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.878743 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:14:18.285756248 +0000 UTC Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.891275 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:50 crc kubenswrapper[4787]: E0219 19:19:50.891555 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.937027 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.937080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.937097 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.937121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:50 crc kubenswrapper[4787]: I0219 19:19:50.937138 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:50Z","lastTransitionTime":"2026-02-19T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.039659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.039707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.039715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.039735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.039745 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.142222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.142259 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.142268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.142281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.142291 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.245299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.245360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.245378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.245468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.245484 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.348151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.348236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.348250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.348266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.348278 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.412299 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.431976 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.448844 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.451190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.451504 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.451752 4787 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.452164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.452346 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.471163 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.487499 4787 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.500199 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.512435 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.525298 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.539087 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.553137 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.554554 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.554584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.554593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc 
kubenswrapper[4787]: I0219 19:19:51.554619 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.554631 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.566942 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.581747 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f
2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe41
87bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.598688 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66c
c279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.613055 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.626309 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.640096 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc 
kubenswrapper[4787]: I0219 19:19:51.657004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.657041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.657050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.657088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.657099 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.660201 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.682924 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, 
V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.698684 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:51Z is after 2025-08-24T17:21:41Z"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.760660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.761001 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.761087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.761187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.761278 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.865213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.865583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.865887 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.866108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.866267 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.879686 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:21:25.856341249 +0000 UTC
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.891470 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.891478 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.892098 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 19:19:51 crc kubenswrapper[4787]: E0219 19:19:51.892388 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 19:19:51 crc kubenswrapper[4787]: E0219 19:19:51.892695 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 19:19:51 crc kubenswrapper[4787]: E0219 19:19:51.892811 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.969520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.969955 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.970047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.970140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:51 crc kubenswrapper[4787]: I0219 19:19:51.970223 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:51Z","lastTransitionTime":"2026-02-19T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.072844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.073121 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.073186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.073248 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.073314 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.176517 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.176563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.176575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.176592 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.176621 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.279076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.279128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.279143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.279161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.279177 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.381957 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.382026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.382041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.382067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.382084 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.484710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.484778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.484790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.484812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.484828 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.588065 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.588155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.588176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.588208 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.588228 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.691125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.691186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.691202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.691220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.691233 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.793661 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.793719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.793731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.793749 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.793762 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.880711 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:26:42.299530617 +0000 UTC
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.890981 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6"
Feb 19 19:19:52 crc kubenswrapper[4787]: E0219 19:19:52.891211 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.895785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.895841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.895854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.895870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.895880 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.907570 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:52Z is after 2025-08-24T17:21:41Z"
Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.927037 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:52Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.941091 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:52Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.956692 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:52Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.974004 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a
7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:52Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.999043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.999109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.999124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.999180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:52 crc kubenswrapper[4787]: I0219 19:19:52.999198 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:52Z","lastTransitionTime":"2026-02-19T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:52.999836 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:52Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.027385 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, 
V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.041224 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.052908 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc 
kubenswrapper[4787]: I0219 19:19:53.069860 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.087425 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.102254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.102330 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.102345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.102371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.102385 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.108240 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.123726 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"ipta
bles-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.142466 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.161077 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.180401 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.198641 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.205355 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.205428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.205444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc 
kubenswrapper[4787]: I0219 19:19:53.205475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.205486 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.214100 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:53Z is after 2025-08-24T17:21:41Z" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.308541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.308949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.309090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.309264 4787 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.309400 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.413306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.413364 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.413375 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.413396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.413407 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.516352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.516401 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.516411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.516426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.516440 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.618527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.618572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.618585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.618603 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.618655 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.721991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.722049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.722062 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.722089 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.722104 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.825875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.825931 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.825941 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.825960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.825975 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.881541 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:03:21.758036421 +0000 UTC Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.891872 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.891997 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:53 crc kubenswrapper[4787]: E0219 19:19:53.892027 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.892156 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:53 crc kubenswrapper[4787]: E0219 19:19:53.892255 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:53 crc kubenswrapper[4787]: E0219 19:19:53.892309 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.929244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.929289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.929301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.929319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:53 crc kubenswrapper[4787]: I0219 19:19:53.929332 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:53Z","lastTransitionTime":"2026-02-19T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.032533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.032597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.032628 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.032654 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.032668 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.135438 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.135489 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.135501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.135523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.135540 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.238929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.239537 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.239682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.239791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.239878 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.342796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.343109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.343173 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.343233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.343294 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.445693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.445730 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.445739 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.445751 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.445762 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.548141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.548190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.548202 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.548220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.548233 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.651692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.651753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.651771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.651791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.651806 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.755004 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.755068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.755080 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.755104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.755118 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.861677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.861846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.861863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.861890 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.861907 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.881801 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 01:55:50.51227252 +0000 UTC Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.891266 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:54 crc kubenswrapper[4787]: E0219 19:19:54.891466 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.964638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.964687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.964720 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.964738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:54 crc kubenswrapper[4787]: I0219 19:19:54.964750 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:54Z","lastTransitionTime":"2026-02-19T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.078983 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.079016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.079024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.079037 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.079045 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.180885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.180918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.180926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.180938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.180947 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.283670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.283698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.283707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.283719 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.283728 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.386033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.386081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.386090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.386102 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.386110 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.488738 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.488772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.488783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.488815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.488827 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.592551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.592599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.592623 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.592641 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.592653 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.694991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.695040 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.695049 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.695063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.695073 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.798956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.799006 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.799020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.799043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.799058 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.882397 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:58:21.31306523 +0000 UTC Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.891448 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:55 crc kubenswrapper[4787]: E0219 19:19:55.891639 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.891716 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:55 crc kubenswrapper[4787]: E0219 19:19:55.892094 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.892217 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:55 crc kubenswrapper[4787]: E0219 19:19:55.892290 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.902425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.902456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.902466 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.902482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:55 crc kubenswrapper[4787]: I0219 19:19:55.902492 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:55Z","lastTransitionTime":"2026-02-19T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.004852 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.004892 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.004904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.004921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.004934 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.108473 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.108525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.108538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.108558 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.108571 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.211552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.211630 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.211649 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.211676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.211694 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.315011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.315082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.315101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.315130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.315154 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.418837 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.418909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.418925 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.418951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.418965 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.521795 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.521851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.521865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.521886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.521908 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.625297 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.625348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.625362 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.625379 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.625392 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.727585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.727648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.727659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.727674 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.727685 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.829847 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.829916 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.829930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.829954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.829967 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.883382 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:27:07.173961779 +0000 UTC Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.891766 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:56 crc kubenswrapper[4787]: E0219 19:19:56.891988 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.933205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.933252 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.933264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.933283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:56 crc kubenswrapper[4787]: I0219 19:19:56.933294 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:56Z","lastTransitionTime":"2026-02-19T19:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.036851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.036906 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.036918 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.036937 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.036948 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.141455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.141508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.141519 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.141540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.141556 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.244633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.244673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.244682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.244699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.244709 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.348050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.348101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.348111 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.348169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.348203 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.450595 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.450690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.450700 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.450716 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.450725 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.553090 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.553167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.553178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.553195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.553206 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.655351 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.655402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.655410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.655425 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.655454 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.757977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.758024 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.758034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.758051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.758061 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.860727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.860781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.860792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.860812 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.860824 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.884354 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:20:42.192907152 +0000 UTC Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.891632 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:57 crc kubenswrapper[4787]: E0219 19:19:57.891780 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.892004 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:57 crc kubenswrapper[4787]: E0219 19:19:57.892055 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.892165 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:57 crc kubenswrapper[4787]: E0219 19:19:57.892212 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.963548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.963633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.963645 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.963663 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:57 crc kubenswrapper[4787]: I0219 19:19:57.963674 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:57Z","lastTransitionTime":"2026-02-19T19:19:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.067119 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.067177 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.067189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.067210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.067226 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.170570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.170656 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.170669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.170691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.170706 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.273776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.273926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.273942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.274525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.274555 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.379696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.379989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.379999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.380012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.380024 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.482848 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.482894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.482904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.482921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.482934 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.585693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.585742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.585791 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.585811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.585825 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.688263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.688338 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.688355 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.688381 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.688396 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.791797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.791854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.791872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.791894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.791908 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.884896 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:44:26.512049718 +0000 UTC Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.891512 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:19:58 crc kubenswrapper[4787]: E0219 19:19:58.891795 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.893907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.893962 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.893975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.893996 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.894020 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.996912 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.996968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.996984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.997003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:58 crc kubenswrapper[4787]: I0219 19:19:58.997020 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:58Z","lastTransitionTime":"2026-02-19T19:19:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.100697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.100754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.100770 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.100794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.100810 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.203436 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.203491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.203506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.203521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.203531 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.306521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.306590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.306618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.306647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.306665 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.410277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.410368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.410380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.410403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.410416 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.513280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.513341 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.513354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.513374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.513388 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.618714 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.618788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.618803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.618832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.618844 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.721488 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.721550 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.721564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.721584 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.721598 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.824442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.824477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.824486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.824498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.824512 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.885037 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:06:50.673571779 +0000 UTC Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.891466 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.891516 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.891495 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:19:59 crc kubenswrapper[4787]: E0219 19:19:59.891631 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:19:59 crc kubenswrapper[4787]: E0219 19:19:59.891722 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:19:59 crc kubenswrapper[4787]: E0219 19:19:59.892094 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.892399 4787 scope.go:117] "RemoveContainer" containerID="3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492" Feb 19 19:19:59 crc kubenswrapper[4787]: E0219 19:19:59.892697 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.927336 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.927432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.927449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.927470 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:19:59 crc kubenswrapper[4787]: I0219 19:19:59.927480 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:19:59Z","lastTransitionTime":"2026-02-19T19:19:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.030360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.030415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.030428 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.030448 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.030461 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.133172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.133254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.133277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.133301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.133315 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.236484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.236547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.236558 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.236578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.236624 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.340422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.340477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.340490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.340511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.340547 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.442923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.442975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.442986 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.443007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.443019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.545972 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.546017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.546029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.546050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.546063 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.649105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.649158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.649174 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.649189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.649203 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.753560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.753655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.753669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.753687 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.753699 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.855814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.855851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.855868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.855889 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.855902 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.870714 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:00Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.876456 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.876570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.876638 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.876748 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.876786 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.886009 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:38:37.26916978 +0000 UTC Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.891685 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.891862 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.893249 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:00Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.897987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.898019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.898030 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.898054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.898066 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.910724 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:00Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.914932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.915005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.915018 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.915036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.915049 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.932784 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:00Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.939596 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.939667 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.939680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.939704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.939719 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.954061 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:00Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:00 crc kubenswrapper[4787]: E0219 19:20:00.954232 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.956169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.956249 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.956263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.956283 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:00 crc kubenswrapper[4787]: I0219 19:20:00.956298 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:00Z","lastTransitionTime":"2026-02-19T19:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.058767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.058814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.058824 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.058843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.058863 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.172405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.172468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.172481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.172507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.172524 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.276244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.276279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.276287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.276301 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.276310 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.378515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.378548 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.378557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.378572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.378581 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.480718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.480769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.480782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.480799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.480812 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.583819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.583858 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.583869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.583885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.583895 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.659850 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:01 crc kubenswrapper[4787]: E0219 19:20:01.660021 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:20:01 crc kubenswrapper[4787]: E0219 19:20:01.660125 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:20:33.66010295 +0000 UTC m=+101.450768882 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.686710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.686753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.686763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.686780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.686791 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.789477 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.789523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.789531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.789546 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.789557 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.886473 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:03:39.856183764 +0000 UTC Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.890791 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.890840 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:01 crc kubenswrapper[4787]: E0219 19:20:01.890930 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:01 crc kubenswrapper[4787]: E0219 19:20:01.891008 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.890848 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:01 crc kubenswrapper[4787]: E0219 19:20:01.891449 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.902916 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.902968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.902979 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.902997 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:01 crc kubenswrapper[4787]: I0219 19:20:01.903008 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:01Z","lastTransitionTime":"2026-02-19T19:20:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.013558 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.013631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.013644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.013665 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.013678 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.116627 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.116697 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.116711 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.116735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.116752 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.219865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.219932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.219947 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.219974 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.219990 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.322742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.322807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.322821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.322843 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.322857 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.424883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.424948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.424965 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.424989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.425004 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.530881 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.530927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.530936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.530955 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.530965 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.633834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.633894 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.633917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.633949 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.633975 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.737196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.737261 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.737281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.737374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.737395 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.839969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.840023 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.840044 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.840063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.840077 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.886744 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:57:04.689676922 +0000 UTC Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.891252 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:02 crc kubenswrapper[4787]: E0219 19:20:02.891548 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.916318 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:02Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.939886 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, 
V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:02Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.942564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.942622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.942634 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.942651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.942661 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:02Z","lastTransitionTime":"2026-02-19T19:20:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.956109 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:02Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:02 crc kubenswrapper[4787]: I0219 19:20:02.970575 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:02Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:02 crc 
kubenswrapper[4787]: I0219 19:20:02.984950 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:02Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.000652 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:02Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.016853 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.032552 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.044901 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.044963 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.044973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.044990 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.045003 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.048051 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.060769 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.076465 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.091720 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.108232 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.129670 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c528
52910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.148008 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.150222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.150295 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.150307 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.150326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.150337 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.164458 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.178689 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.194552 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.253722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.253771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.253802 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.253819 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.253831 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.357036 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.357204 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.357221 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.357241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.357255 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.459653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.459728 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.459750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.459779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.459799 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.562945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.562996 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.563012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.563035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.563050 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.666329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.666395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.666412 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.666435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.666449 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.747418 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/0.log" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.747494 4787 generic.go:334] "Generic (PLEG): container finished" podID="f0706129-aa73-40ed-899f-02882ed5a4cc" containerID="ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933" exitCode=1 Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.747552 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerDied","Data":"ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.748315 4787 scope.go:117] "RemoveContainer" containerID="ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.767971 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.769303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.769335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.769349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.769367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.769389 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.780835 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.799347 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"2026-02-19T19:19:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b\\\\n2026-02-19T19:19:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b to /host/opt/cni/bin/\\\\n2026-02-19T19:19:18Z [verbose] multus-daemon started\\\\n2026-02-19T19:19:18Z [verbose] Readiness Indicator file check\\\\n2026-02-19T19:20:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.816864 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.832523 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66c
c279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.854836 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, 
V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.868761 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.871178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.871205 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.871219 4787 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.871237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.871247 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.884560 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 
19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.887382 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:29:41.598617339 +0000 UTC Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.892893 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.892995 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.893022 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:03 crc kubenswrapper[4787]: E0219 19:20:03.893024 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:03 crc kubenswrapper[4787]: E0219 19:20:03.893212 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:03 crc kubenswrapper[4787]: E0219 19:20:03.893289 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.906804 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.923767 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.940345 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.955772 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.967931 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.973538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.973681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.973762 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 
19:20:03.973833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.973890 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:03Z","lastTransitionTime":"2026-02-19T19:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.983422 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:03 crc kubenswrapper[4787]: I0219 19:20:03.996358 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:03Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.010299 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.027131 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.042153 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.076318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.076368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.076378 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.076397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.076408 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.179461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.179893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.180156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.180570 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.180897 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.283898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.284268 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.284396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.284633 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.284726 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.388403 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.388458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.388468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.388486 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.388496 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.490827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.490872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.490881 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.490898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.490911 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.593433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.593490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.593510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.593531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.593547 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.696247 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.696326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.696340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.696360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.696373 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.752768 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/0.log" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.752839 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerStarted","Data":"f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.770210 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.785921 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.799893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.799996 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.800015 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.800041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.800059 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.801552 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d9289492
3e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.822453 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.837803 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.853138 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.871689 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.883406 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.888417 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:38:58.300088992 +0000 UTC Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.891868 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:04 crc kubenswrapper[4787]: E0219 19:20:04.892031 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.897014 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.902239 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.902281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.902292 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.902313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.902328 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:04Z","lastTransitionTime":"2026-02-19T19:20:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.910005 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.923698 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1a2a8391d8722e1286e25c88
cfe51b58383961ac6960f6b8ea68a657f322fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"2026-02-19T19:19:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b\\\\n2026-02-19T19:19:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b to /host/opt/cni/bin/\\\\n2026-02-19T19:19:18Z [verbose] multus-daemon started\\\\n2026-02-19T19:19:18Z [verbose] Readiness Indicator file check\\\\n2026-02-19T19:20:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.939086 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.952372 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66c
c279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.967284 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.980640 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:04 crc kubenswrapper[4787]: I0219 19:20:04.991828 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:04Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:05 crc 
kubenswrapper[4787]: I0219 19:20:05.005112 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.005149 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.005161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.005179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.005192 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.014623 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:05Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.035698 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, 
V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:05Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.108270 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.108324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.108335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.108352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.108364 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.211768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.211820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.211834 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.211854 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.211864 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.314437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.314529 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.314546 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.314564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.314578 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.417680 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.417734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.417747 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.417767 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.417782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.521127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.521168 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.521176 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.521192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.521202 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.624262 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.624300 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.624312 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.624331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.624344 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.727690 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.727760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.727783 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.727813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.727838 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.830357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.830449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.830475 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.830505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.830521 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.889054 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:28:00.750715715 +0000 UTC Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.891482 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.891529 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.891518 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:05 crc kubenswrapper[4787]: E0219 19:20:05.891750 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:05 crc kubenswrapper[4787]: E0219 19:20:05.891870 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:05 crc kubenswrapper[4787]: E0219 19:20:05.892039 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.933763 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.933811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.933821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.933842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:05 crc kubenswrapper[4787]: I0219 19:20:05.933852 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:05Z","lastTransitionTime":"2026-02-19T19:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.036827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.036874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.036886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.036904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.036917 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.140286 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.140340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.140353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.140373 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.140385 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.243481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.243539 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.243551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.243573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.243587 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.346567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.346642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.346659 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.346683 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.346700 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.449857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.449948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.449969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.449999 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.450019 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.552511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.552575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.552599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.552675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.552707 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.656320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.656371 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.656380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.656399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.656410 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.759871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.759921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.759933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.759956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.759969 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.862281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.862324 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.862337 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.862353 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.862362 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.889414 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 20:16:31.933049828 +0000 UTC Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.891902 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:06 crc kubenswrapper[4787]: E0219 19:20:06.892032 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.964870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.964930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.964939 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.964959 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:06 crc kubenswrapper[4787]: I0219 19:20:06.964971 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:06Z","lastTransitionTime":"2026-02-19T19:20:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.068038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.068127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.068137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.068155 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.068164 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.171007 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.171066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.171079 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.171110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.171127 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.274129 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.274209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.274220 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.274236 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.274267 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.376753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.376790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.376808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.376825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.376835 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.480349 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.480420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.480440 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.480468 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.480490 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.583433 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.583479 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.583490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.583508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.583520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.686484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.686523 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.686534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.686553 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.686565 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.789792 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.789825 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.789835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.789849 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.789859 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.890169 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:41:51.02982616 +0000 UTC Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.891003 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:07 crc kubenswrapper[4787]: E0219 19:20:07.891136 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.891333 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:07 crc kubenswrapper[4787]: E0219 19:20:07.891385 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.891495 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:07 crc kubenswrapper[4787]: E0219 19:20:07.891578 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.892470 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.892503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.892515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.892532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.892545 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.995794 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.995857 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.995871 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.995896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:07 crc kubenswrapper[4787]: I0219 19:20:07.995912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:07Z","lastTransitionTime":"2026-02-19T19:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.100051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.100109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.100126 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.100148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.100163 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.202874 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.202926 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.202944 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.202968 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.202984 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.306660 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.306730 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.306743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.306770 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.306782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.409876 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.409928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.409943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.409964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.409978 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.512103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.512152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.512163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.512180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.512195 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.615201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.615245 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.615254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.615271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.615281 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.718070 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.718114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.718125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.718142 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.718153 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.821201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.821269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.821278 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.821314 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.821325 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.891172 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:15:39.838922144 +0000 UTC Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.891369 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:08 crc kubenswrapper[4787]: E0219 19:20:08.891546 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.923670 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.923729 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.923740 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.923758 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:08 crc kubenswrapper[4787]: I0219 19:20:08.923770 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:08Z","lastTransitionTime":"2026-02-19T19:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.026432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.026482 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.026491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.026507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.026532 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.129510 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.129563 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.129575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.129593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.129621 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.232095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.232147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.232160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.232194 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.232208 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.334622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.334715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.334726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.334742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.334752 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.442310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.442368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.442385 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.442405 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.442422 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.545989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.546026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.546034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.546050 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.546061 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.648853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.649227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.649347 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.649437 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.649520 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.752485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.752530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.752543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.752560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.752604 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.855432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.855525 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.855545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.855572 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.855593 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.891348 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:06:03.741112787 +0000 UTC Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.891532 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.891586 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:09 crc kubenswrapper[4787]: E0219 19:20:09.891683 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.891586 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:09 crc kubenswrapper[4787]: E0219 19:20:09.891918 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:09 crc kubenswrapper[4787]: E0219 19:20:09.892065 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.959408 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.959735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.959872 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.959964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:09 crc kubenswrapper[4787]: I0219 19:20:09.960056 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:09Z","lastTransitionTime":"2026-02-19T19:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.063524 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.063642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.063665 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.063696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.063715 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.166485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.166562 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.166582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.166642 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.166664 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.269442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.269552 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.269575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.269655 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.269677 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.372132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.372493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.372567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.372696 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.372789 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.476309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.476788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.477011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.477215 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.477400 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.580071 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.580110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.580118 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.580132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.580142 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.682870 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.682951 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.682975 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.683005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.683027 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.786452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.786520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.786547 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.786582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.786633 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.890505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.890710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.890731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.890759 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.890779 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.891267 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:10 crc kubenswrapper[4787]: E0219 19:20:10.891488 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.892953 4787 scope.go:117] "RemoveContainer" containerID="3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.891510 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:07:27.012505941 +0000 UTC Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.994068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.994143 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.994186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.994214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:10 crc kubenswrapper[4787]: I0219 19:20:10.994231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:10Z","lastTransitionTime":"2026-02-19T19:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.098127 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.098206 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.098227 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.098253 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.098271 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.198785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.198864 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.198882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.198907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.198926 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.215453 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.220131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.220169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.220180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.220197 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.220209 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.232803 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.238068 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.238116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.238130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.238154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.238168 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.251133 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.256917 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.256964 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.256978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.256997 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.257007 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.270883 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.278828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.278875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.278886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.278904 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.278914 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.315141 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.315267 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.317367 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.317422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.317435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.317453 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.317465 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.420898 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.420945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.420954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.420969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.420980 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.523492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.523551 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.523568 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.523590 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.523623 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.626745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.626798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.626815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.626835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.626850 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.729141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.729184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.729196 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.729214 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.729227 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.780584 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/2.log" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.784474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.785211 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.807712 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d89
06d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.826654 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66c
c279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.831266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.831306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.831318 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.831337 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.831350 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.845713 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.859850 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.877730 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"2026-02-19T19:19:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b\\\\n2026-02-19T19:19:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b to /host/opt/cni/bin/\\\\n2026-02-19T19:19:18Z [verbose] multus-daemon started\\\\n2026-02-19T19:19:18Z [verbose] Readiness Indicator file check\\\\n2026-02-19T19:20:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.890857 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.890907 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.890857 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.891131 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.891255 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:11 crc kubenswrapper[4787]: E0219 19:20:11.891381 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.893884 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:26:21.455550931 +0000 UTC Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.900461 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.923111 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ov
nkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.934256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.934310 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.934323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.934342 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.934357 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:11Z","lastTransitionTime":"2026-02-19T19:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.940260 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.954432 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc 
kubenswrapper[4787]: I0219 19:20:11.967539 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.983129 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:11 crc kubenswrapper[4787]: I0219 19:20:11.998917 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.013640 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.026848 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.036490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.036521 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.036534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.036555 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.036567 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.037023 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.049365 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.066981 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.081598 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.139708 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.139772 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.139789 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.139809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.139822 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.242233 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.242321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.242344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.242369 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.242388 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.345104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.345140 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.345148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.345162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.345174 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.447081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.447132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.447145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.447163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.447176 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.549929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.549989 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.550009 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.550033 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.550051 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.652685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.652725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.652735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.652750 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.652763 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.755087 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.755145 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.755158 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.755178 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.755204 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.790110 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/3.log" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.791135 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/2.log" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.794685 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" exitCode=1 Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.794735 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.794788 4787 scope.go:117] "RemoveContainer" containerID="3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.795558 4787 scope.go:117] "RemoveContainer" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" Feb 19 19:20:12 crc kubenswrapper[4787]: E0219 19:20:12.795810 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.815045 4787 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.837863 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc 
kubenswrapper[4787]: I0219 19:20:12.857895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.857934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.857944 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.857960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.857972 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.862832 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.890105 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, 
V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"trics-daemon-cv5f6 before timer (time: 2026-02-19 19:20:12.95879994 +0000 UTC m=+1.759142172): skip\\\\nI0219 19:20:11.790691 6877 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 45.991µs)\\\\nI0219 
19:20:11.790987 6877 event.go:377] Event(v1.ObjectReference{Kind:\\\\\\\"Pod\\\\\\\", Namespace:\\\\\\\"openshift-multus\\\\\\\", Name:\\\\\\\"network-metrics-daemon-cv5f6\\\\\\\", UID:\\\\\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-cv5f6: failed to update pod openshift-multus/network-metrics-daemon-cv5f6: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z\\\\nI0219 19:20:11.791036 6877 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 19:20:11.791046 6877 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:20:11.791074 6877 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:20:11.791166 6877 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217
c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.891231 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:12 crc kubenswrapper[4787]: E0219 19:20:12.891442 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.894321 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:57:24.024171648 +0000 UTC Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.907318 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.923288 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-al
erter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.938993 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.955861 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.961106 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.961193 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.961216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.961244 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.961263 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:12Z","lastTransitionTime":"2026-02-19T19:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.973297 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:12 crc kubenswrapper[4787]: I0219 19:20:12.989358 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:12Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.002958 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.017213 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.032646 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.051269 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.065352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.065402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.065417 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.065434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.065444 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.069739 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"2026-02-19T19:19:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b\\\\n2026-02-19T19:19:17+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b to /host/opt/cni/bin/\\\\n2026-02-19T19:19:18Z [verbose] multus-daemon started\\\\n2026-02-19T19:19:18Z [verbose] Readiness Indicator file check\\\\n2026-02-19T19:20:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.087047 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c528
52910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.105508 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.123173 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.140098 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.154417 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.168095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.168137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.168147 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.168164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.168176 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.173906 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"2026-02-19T19:19:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b\\\\n2026-02-19T19:19:17+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b to /host/opt/cni/bin/\\\\n2026-02-19T19:19:18Z [verbose] multus-daemon started\\\\n2026-02-19T19:19:18Z [verbose] Readiness Indicator file check\\\\n2026-02-19T19:20:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.193357 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c528
52910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.211984 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.232848 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3307e0f1cd93f5879e652184fcf06c2d2dc3fb361e75c1f1eee8909014667492\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:19:43Z\\\",\\\"message\\\":\\\"3, clusterEndpoints:services.lbEndpoints{Port:6443, V4IPs:[]string{\\\\\\\"192.168.126.11\\\\\\\"}, V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0219 19:19:43.855391 6476 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:19:43Z is after 2025-08-24T17:21:41Z]\\\\nI0219 19:19:43.856631 6476 services_controller.go:445] Built service default/kubernetes LB template configs for network=default: []services.lbConfig(nil)\\\\nI0219 19:19:43.856590 6476 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"trics-daemon-cv5f6 before timer (time: 2026-02-19 19:20:12.95879994 +0000 UTC m=+1.759142172): skip\\\\nI0219 19:20:11.790691 6877 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 45.991µs)\\\\nI0219 19:20:11.790987 6877 event.go:377] Event(v1.ObjectReference{Kind:\\\\\\\"Pod\\\\\\\", Namespace:\\\\\\\"openshift-multus\\\\\\\", Name:\\\\\\\"network-metrics-daemon-cv5f6\\\\\\\", UID:\\\\\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", 
ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-cv5f6: failed to update pod openshift-multus/network-metrics-daemon-cv5f6: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z\\\\nI0219 19:20:11.791036 6877 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 19:20:11.791046 6877 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:20:11.791074 6877 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:20:11.791166 6877 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:20:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217
c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.247698 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.257404 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc 
kubenswrapper[4787]: I0219 19:20:13.272435 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.272507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.272522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.272540 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.272552 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.279329 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.292447 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.304536 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.318284 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.336764 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.350709 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.368153 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.374492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.374527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.374536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc 
kubenswrapper[4787]: I0219 19:20:13.374573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.374584 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.387384 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.399323 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.411268 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.476983 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.477017 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.477025 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.477039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.477050 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.579138 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.579190 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.579200 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.579217 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.579227 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.682157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.682210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.682222 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.682241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.682255 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.785109 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.785150 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.785160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.785175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.785186 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.800933 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/3.log" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.805280 4787 scope.go:117] "RemoveContainer" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" Feb 19 19:20:13 crc kubenswrapper[4787]: E0219 19:20:13.805675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.837300 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a09bfcea-176d-461f-9881-782e58b60f71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://957f6f4eda1844b90fd979bd072de9555ee4250c2f58b5f6add9b41775803acd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fb69e949db14b7f7b446d049bc7bdb22736bcbe1f30f5020d97ddd7132c8b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91379ef020ba1de7cffdbcd082d999b8de463d8456d52d2c787805d4f262c063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27084d85fe6942b66a4d695dafac45b1007611a79c286753383d82bb62b4414\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da1dfa9b5b98ed73088df3a5bf8cb7547f9ace9a93b12122af5096049d9f4d4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f24e31db0846b6828f4305ace7c5b3d6bdcc87661aca5b9c33c933d4d9c6a6c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b623f223a77d40500bc4bad73761b30e680ef2136a7c0e72dffa0cb22b3dc8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be700adb5dfedd8754915f573c79d5f111d1c265aea9d7217321f2cb0341481f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.864419 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4989ff60-0c48-4f78-bcf6-2d394ee929fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:11Z\\\",\\\"message\\\":\\\"trics-daemon-cv5f6 before timer (time: 2026-02-19 19:20:12.95879994 +0000 UTC m=+1.759142172): skip\\\\nI0219 19:20:11.790691 6877 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 45.991µs)\\\\nI0219 19:20:11.790987 6877 event.go:377] Event(v1.ObjectReference{Kind:\\\\\\\"Pod\\\\\\\", 
Namespace:\\\\\\\"openshift-multus\\\\\\\", Name:\\\\\\\"network-metrics-daemon-cv5f6\\\\\\\", UID:\\\\\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\\\\\", APIVersion:\\\\\\\"v1\\\\\\\", ResourceVersion:\\\\\\\"26911\\\\\\\", FieldPath:\\\\\\\"\\\\\\\"}): type: 'Warning' reason: 'ErrorAddingResource' addLogicalPort failed for openshift-multus/network-metrics-daemon-cv5f6: failed to update pod openshift-multus/network-metrics-daemon-cv5f6: Internal error occurred: failed calling webhook \\\\\\\"pod.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/pod?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:11Z is after 2025-08-24T17:21:41Z\\\\nI0219 19:20:11.791036 6877 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 19:20:11.791046 6877 ovnkube.go:599] Stopped ovnkube\\\\nI0219 19:20:11.791074 6877 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0219 19:20:11.791166 6877 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:20:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09619d974920141afc
e6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r4mht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5xjgd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.888502 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b78cae4-54ac-423d-8d4e-27fadc07d335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cbd15b21a93bfd0b4a098a73f2c923118108bb7cbaeac4e1eab3e0f6569ad06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a19efb6f751f483e2a41366e08babfbc690d
323377f45633b6c31bda7b060cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-975jb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfkn2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.888673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.888726 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.888751 4787 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.888785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.888808 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.891302 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:13 crc kubenswrapper[4787]: E0219 19:20:13.891440 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.891452 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:13 crc kubenswrapper[4787]: E0219 19:20:13.891526 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.891642 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:13 crc kubenswrapper[4787]: E0219 19:20:13.891980 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.894994 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:29:10.477901023 +0000 UTC Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.907555 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56f25fce-8c35-4786-94f3-93854459f32a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2dtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cv5f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc 
kubenswrapper[4787]: I0219 19:20:13.928466 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59d17e2b-5c80-41fe-800e-51f505bf04d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18029bfd9e5a13d7fcfb59b4c8d7b3ec8377c964b6a3fe7c0410537171821c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31725abffc2b0154d474c3d92894923e93c3cb99692217d7eedf58930069e569\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f1f902b86bb95586e28de07f9912c68e930170f812dec275afdaf668d06c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3ce940a0e200c87c6f3e458575581bb8cc2fbfcbc77c91b76ebeff0a221bcc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.948352 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a732cca-6c31-457d-a0fd-0b8cb8d38bcf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff0502739b0cedb53bc265cc7d917b3627edd9c94e67736b9c513deac6171fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa721767ee702956520a402e24121f90bc99e692b71d2f852ad31db6766ff6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6378208470b089b8535f45b40a75a7d2fad6a427ce2c4a464c4660edd83cc39a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T19:18:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.967212 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d5822f2a232651494b875726dbf21e24bcc3e0438680b27df0f35526c039890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.983298 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ca5378c791866960924553c83bc92709bd49cfb6afc3a0134256050ee76b2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.991846 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.991945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.991959 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.991980 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.991993 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:13Z","lastTransitionTime":"2026-02-19T19:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:13 crc kubenswrapper[4787]: I0219 19:20:13.998522 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:13Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.016882 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.034064 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.049120 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00bdf088-5e51-4d51-9cb1-8e590898482c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e59642a12b386bf31300de8513edfadd7157604830a559951d8d23e3391e4fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4276346b9ca2fe966079006d219a537a812ea8e9
ef2af2d1a610f70ab299c1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sp8fd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wlszq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.063338 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-z4xw6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcc0cf1c-007a-4057-b79c-86396b74ca3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0614bf96482bc819426a85905f26f4a992736109cb74dbb1a6f11c6cc4a9048f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wj4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-z4xw6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.081211 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b00336-c0de-40ff-ac4e-ab902c952805\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:18:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T19:19:13Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 19:19:07.655108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 19:19:07.655852 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-795732858/tls.crt::/tmp/serving-cert-795732858/tls.key\\\\\\\"\\\\nI0219 19:19:13.131876 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 19:19:13.139824 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 19:19:13.139856 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 19:19:13.139882 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 19:19:13.139888 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 19:19:13.146029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 19:19:13.146050 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 19:19:13.146060 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146069 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 19:19:13.146075 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 19:19:13.146080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 19:19:13.146085 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 19:19:13.146089 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 19:19:13.148157 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
19T19:18:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:18:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:18:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:18:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.095960 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.096047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.096063 4787 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.096095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.096120 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.102797 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da271adee13b840fc29081d1b39d6708dd1901a121a09d055c49efddf0704a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f13e909b7f54ab0d1d6dc5baa9ab8d2bb3a33e7b3fbf65594f8732f175d1759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.116269 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44jcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c7b543f-66f3-4657-b0b6-2f47a4a40d40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e9e22ffdb27df7990d6ff9e0301e1b65eb97885685b4cb3136e44bc4be24b2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mvsww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44jcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.132826 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qxzkq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0706129-aa73-40ed-899f-02882ed5a4cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T19:20:03Z\\\",\\\"message\\\":\\\"2026-02-19T19:19:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b\\\\n2026-02-19T19:19:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b6c2ff2c-74b1-447f-8e1f-0866038ea73b to /host/opt/cni/bin/\\\\n2026-02-19T19:19:18Z [verbose] multus-daemon started\\\\n2026-02-19T19:19:18Z [verbose] Readiness Indicator file check\\\\n2026-02-19T19:20:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j42sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qxzkq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.151774 4787 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9cgws" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0af035a6-d8a5-4686-b509-ec321548b323\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T19:19:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b2b0cb78a66a0d8906d5244a46f51de48ceb1b209588eb4516bd579138c7784\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66684821ba058287ec5c1326cc40e4994db4a22b5a3c671c1c04724eb3ecc53e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebfb7aea58743894416105aa4f455fcfb4142aefae74c0c0925bc5799aa988b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://1d93953951d36ea33588878fae02669f13c1d05a3fa976318de351541df21608\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c52852910b2d8a949ebf48febdeb6c90fad32576e345225c44a7b659e097d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b57373321645987b6c5c58489f1af4a4f56c39a7e103272eac75c735e13d5994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369d6fe4187bec8d5d40cde1b1cc794c04a712bf8d92d1e36b2fb41aaec81262\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T19:19:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T19:19:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vld82\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T19:19:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9cgws\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:14Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.198386 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.198430 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.198439 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.198454 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.198466 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.301442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.301531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.301542 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.301560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.301574 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.404764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.405167 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.405237 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.405306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.405377 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.508589 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.508681 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.508698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.508722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.508737 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.611718 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.611777 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.611818 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.611842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.611858 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.715074 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.715452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.715534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.715621 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.715706 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.817685 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.817756 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.817766 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.817780 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.817790 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.891088 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:14 crc kubenswrapper[4787]: E0219 19:20:14.891410 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.895503 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:44:37.221187074 +0000 UTC Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.907547 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.921156 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.921209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.921228 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.921250 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:14 crc kubenswrapper[4787]: I0219 19:20:14.921266 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:14Z","lastTransitionTime":"2026-02-19T19:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.025083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.025152 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.025169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.025203 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.025222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.128471 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.128522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.128536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.128559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.128577 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.231264 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.231345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.231370 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.231399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.231418 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.335114 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.335181 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.335198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.335226 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.335243 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.438559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.438635 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.438648 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.438669 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.438687 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.542977 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.543058 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.543082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.543117 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.543143 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.646688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.647223 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.647384 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.647531 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.647987 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.751810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.752242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.752256 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.752273 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.752284 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.856706 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.856773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.856785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.856805 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.856823 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.891135 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.891143 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:15 crc kubenswrapper[4787]: E0219 19:20:15.891754 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.891191 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:15 crc kubenswrapper[4787]: E0219 19:20:15.891936 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:15 crc kubenswrapper[4787]: E0219 19:20:15.892262 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.896376 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:26:45.966229422 +0000 UTC Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.959788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.959832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.959842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.959860 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:15 crc kubenswrapper[4787]: I0219 19:20:15.959872 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:15Z","lastTransitionTime":"2026-02-19T19:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.062484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.062534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.062546 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.062567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.062583 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.165820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.165869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.165883 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.166210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.166227 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.268928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.268982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.268995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.269012 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.269023 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.371593 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.371691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.371704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.371723 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.371740 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.474088 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.474395 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.474491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.474580 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.474794 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.577907 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.578303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.578389 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.578502 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.578591 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.681599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.681688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.681699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.681725 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.681736 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.784120 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.784154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.784164 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.784180 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.784191 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.886966 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.887035 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.887047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.887067 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.887078 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.891624 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:16 crc kubenswrapper[4787]: E0219 19:20:16.891933 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.896464 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:19:41.695404793 +0000 UTC Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.989461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.989930 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.990032 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.990131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:16 crc kubenswrapper[4787]: I0219 19:20:16.990228 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:16Z","lastTransitionTime":"2026-02-19T19:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.093591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.093994 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.094130 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.094235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.094337 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.198710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.199157 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.199322 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.199518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.200257 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.304676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.304779 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.304793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.304820 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.304835 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.409057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.409116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.409131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.409154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.409168 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.512841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.512909 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.512927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.512956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.512974 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.615808 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.615868 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.615880 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.615899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.615912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.659557 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.659674 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.659711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.659746 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.659795 4787 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.659865 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.659879 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.659891 4787 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.659933 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.65990135 +0000 UTC m=+149.450567332 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.659990 4787 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.660026 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.660005583 +0000 UTC m=+149.450671835 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.660173 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.660141177 +0000 UTC m=+149.450807309 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.660550 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.660653 4787 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.660671 4787 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.660756 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.660730873 +0000 UTC m=+149.451396815 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.718995 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.719046 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.719056 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.719072 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.719082 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.760655 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.760848 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.760814983 +0000 UTC m=+149.551480915 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.821186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.821240 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.821254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.821275 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: 
I0219 19:20:17.821288 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.891270 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.891382 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.891454 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.891700 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.891738 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:17 crc kubenswrapper[4787]: E0219 19:20:17.891996 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.897132 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:27:46.344596752 +0000 UTC Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.924348 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.924402 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.924414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.924434 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:17 crc kubenswrapper[4787]: I0219 19:20:17.924446 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:17Z","lastTransitionTime":"2026-02-19T19:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.027955 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.028008 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.028022 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.028041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.028054 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.130790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.130832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.130841 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.130856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.130866 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.233567 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.233653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.233668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.233688 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.233703 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.337133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.337464 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.337564 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.337692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.337783 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.441332 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.441396 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.441406 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.441424 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.441436 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.544885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.544934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.544947 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.544967 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.544981 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.648213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.648789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.648991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.649153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.649331 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.753326 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.753380 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.753397 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.753426 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.753440 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.856446 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.856511 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.856527 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.856550 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.856566 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.891774 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:18 crc kubenswrapper[4787]: E0219 19:20:18.892071 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.897639 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:59:55.973151314 +0000 UTC Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.960321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.960394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.960418 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.960452 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:18 crc kubenswrapper[4787]: I0219 19:20:18.960477 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:18Z","lastTransitionTime":"2026-02-19T19:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.062984 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.063410 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.063538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.063752 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.063892 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.167235 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.167287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.167299 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.167316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.167331 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.270210 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.270274 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.270287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.270306 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.270321 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.373407 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.373813 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.373882 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.373981 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.374067 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.476864 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.476914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.476923 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.476942 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.476955 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.580234 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.580284 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.580296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.580313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.580323 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.683105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.683154 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.683172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.683191 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.683203 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.785934 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.785991 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.786128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.786414 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.786465 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.890776 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:19 crc kubenswrapper[4787]: E0219 19:20:19.891183 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.891003 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:19 crc kubenswrapper[4787]: E0219 19:20:19.891867 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.890941 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:19 crc kubenswrapper[4787]: E0219 19:20:19.892260 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.894144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.894189 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.894198 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.894213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.894222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.897751 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:29:50.796659351 +0000 UTC Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.996702 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.996765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.996778 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.996799 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:19 crc kubenswrapper[4787]: I0219 19:20:19.996816 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:19Z","lastTransitionTime":"2026-02-19T19:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.099676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.099737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.099754 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.099781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.099799 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.202987 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.203019 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.203029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.203045 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.203055 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.305415 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.305481 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.305490 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.305507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.305516 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.408279 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.408321 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.408329 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.408344 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.408353 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.510722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.510776 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.510789 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.510809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.510824 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.613054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.613388 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.613461 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.613534 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.613597 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.716733 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.716796 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.716811 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.716836 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.716849 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.819340 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.819394 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.819404 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.819420 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.819445 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.891300 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:20 crc kubenswrapper[4787]: E0219 19:20:20.891859 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.898580 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:56:41.689489737 +0000 UTC Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.921483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.921544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.921555 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.921573 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:20 crc kubenswrapper[4787]: I0219 19:20:20.921588 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:20Z","lastTransitionTime":"2026-02-19T19:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.024285 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.024341 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.024354 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.024377 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.024391 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.127965 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.128010 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.128020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.128039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.128054 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.230682 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.230730 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.230741 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.230762 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.230778 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.333948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.334003 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.334020 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.334042 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.334053 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.335455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.335508 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.335522 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.335543 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.335555 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.357759 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.363365 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.363422 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.363432 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.363451 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.363462 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.378824 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.382676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.382722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.382735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.382753 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.382764 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.395636 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.400313 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.400352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.400360 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.400374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.400386 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.413545 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.417743 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.417817 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.417827 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.417844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.417855 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.436566 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T19:20:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6db17b6c-86dc-4c6b-a0c1-ee45005d3057\\\",\\\"systemUUID\\\":\\\"b30ba7af-b2e2-44e0-b259-a04a3d082dd3\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T19:20:21Z is after 2025-08-24T17:21:41Z" Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.436720 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.438484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.438533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.438545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.438560 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.438572 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.541828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.541895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.541914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.541936 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.541954 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.645038 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.645083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.645093 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.645110 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.645121 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.748736 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.748786 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.748797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.748815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.748825 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.852211 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.852258 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.852269 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.852289 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.852301 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.891532 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.891763 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.891931 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.891924 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.892062 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:21 crc kubenswrapper[4787]: E0219 19:20:21.892417 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.899237 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:51:20.086343535 +0000 UTC Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.955076 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.955124 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.955136 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.955179 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:21 crc kubenswrapper[4787]: I0219 19:20:21.955192 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:21Z","lastTransitionTime":"2026-02-19T19:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.057899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.057940 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.057952 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.057969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.057980 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.161810 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.161873 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.161886 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.161908 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.161922 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.265132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.265582 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.265915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.266146 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.266405 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.370153 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.370494 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.370722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.370910 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.370993 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.474219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.474339 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.474357 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.474374 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.474387 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.577691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.577764 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.577788 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.577809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.577834 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.681026 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.681094 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.681296 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.681319 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.681332 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.783973 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.784016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.784028 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.784043 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.784054 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.886057 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.886144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.886161 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.886192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.886212 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.891529 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:22 crc kubenswrapper[4787]: E0219 19:20:22.891753 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.900190 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:34:53.022910901 +0000 UTC Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.938115 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.938088263 podStartE2EDuration="1m7.938088263s" podCreationTimestamp="2026-02-19 19:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:22.935956624 +0000 UTC m=+90.726622566" watchObservedRunningTime="2026-02-19 19:20:22.938088263 +0000 UTC m=+90.728754205" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.990352 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.990431 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.990455 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.990493 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:22 crc kubenswrapper[4787]: I0219 19:20:22.990522 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:22Z","lastTransitionTime":"2026-02-19T19:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.000036 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfkn2" podStartSLOduration=68.999997288 podStartE2EDuration="1m8.999997288s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:22.985130864 +0000 UTC m=+90.775796816" watchObservedRunningTime="2026-02-19 19:20:22.999997288 +0000 UTC m=+90.790663240" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.036759 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.036739502 podStartE2EDuration="35.036739502s" podCreationTimestamp="2026-02-19 19:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.01871318 +0000 UTC m=+90.809379152" watchObservedRunningTime="2026-02-19 19:20:23.036739502 +0000 UTC m=+90.827405444" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.036898 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=63.036892226 podStartE2EDuration="1m3.036892226s" podCreationTimestamp="2026-02-19 19:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.034069298 +0000 UTC m=+90.824735240" watchObservedRunningTime="2026-02-19 19:20:23.036892226 +0000 UTC m=+90.827558168" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.093034 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.093085 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.093096 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.093116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.093130 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.140230 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-z4xw6" podStartSLOduration=70.140207236 podStartE2EDuration="1m10.140207236s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.129107436 +0000 UTC m=+90.919773378" watchObservedRunningTime="2026-02-19 19:20:23.140207236 +0000 UTC m=+90.930873178" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.140464 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.140458673 podStartE2EDuration="9.140458673s" podCreationTimestamp="2026-02-19 19:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.139668601 +0000 UTC m=+90.930334553" watchObservedRunningTime="2026-02-19 19:20:23.140458673 +0000 UTC m=+90.931124615" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.195503 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.195557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.195581 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.195601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.195632 4787 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.201276 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podStartSLOduration=70.201259957 podStartE2EDuration="1m10.201259957s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.20063041 +0000 UTC m=+90.991296352" watchObservedRunningTime="2026-02-19 19:20:23.201259957 +0000 UTC m=+90.991925899" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.237035 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.237006483 podStartE2EDuration="1m9.237006483s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.21928824 +0000 UTC m=+91.009954182" watchObservedRunningTime="2026-02-19 19:20:23.237006483 +0000 UTC m=+91.027672435" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.255276 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-44jcg" podStartSLOduration=70.255244642 podStartE2EDuration="1m10.255244642s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
19:20:23.25376027 +0000 UTC m=+91.044426212" watchObservedRunningTime="2026-02-19 19:20:23.255244642 +0000 UTC m=+91.045910584" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.277131 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qxzkq" podStartSLOduration=70.277107411 podStartE2EDuration="1m10.277107411s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.276921536 +0000 UTC m=+91.067587478" watchObservedRunningTime="2026-02-19 19:20:23.277107411 +0000 UTC m=+91.067773353" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.298054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.298095 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.298108 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.298123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.298134 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.298399 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9cgws" podStartSLOduration=70.298388094 podStartE2EDuration="1m10.298388094s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:23.297281053 +0000 UTC m=+91.087946995" watchObservedRunningTime="2026-02-19 19:20:23.298388094 +0000 UTC m=+91.089054036" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.401175 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.401577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.401587 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.401637 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.401652 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.504866 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.504915 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.504928 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.504945 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.504957 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.607538 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.607575 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.607583 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.607597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.607622 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.709644 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.709707 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.709722 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.709742 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.709756 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.812542 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.812591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.812625 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.812647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.812660 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.891524 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.891629 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:23 crc kubenswrapper[4787]: E0219 19:20:23.891715 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.891651 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:23 crc kubenswrapper[4787]: E0219 19:20:23.891869 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:23 crc kubenswrapper[4787]: E0219 19:20:23.892043 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.900718 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:52:55.969281587 +0000 UTC Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.916119 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.916159 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.916171 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.916187 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:23 crc kubenswrapper[4787]: I0219 19:20:23.916201 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:23Z","lastTransitionTime":"2026-02-19T19:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.018599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.018664 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.018675 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.018691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.018702 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.121304 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.121356 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.121368 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.121393 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.121405 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.223679 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.223732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.223781 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.223798 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.223810 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.326528 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.326576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.326586 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.326624 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.326636 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.429797 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.429835 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.429881 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.429895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.429905 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.537458 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.537507 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.537516 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.537532 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.537547 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.640484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.640536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.640545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.640559 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.640571 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.743141 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.743186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.743195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.743209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.743220 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.845216 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.845277 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.845291 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.845316 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.845332 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.891629 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:24 crc kubenswrapper[4787]: E0219 19:20:24.891900 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.901135 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:22:13.529304739 +0000 UTC Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.948058 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.948105 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.948116 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.948131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:24 crc kubenswrapper[4787]: I0219 19:20:24.948141 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:24Z","lastTransitionTime":"2026-02-19T19:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.051411 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.051465 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.051484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.051520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.051537 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.154082 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.154137 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.154148 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.154165 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.154176 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.256948 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.257081 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.257101 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.257123 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.257138 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.361209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.361287 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.361303 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.361328 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.361349 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.463653 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.463713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.463727 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.463744 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.463757 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.566280 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.566331 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.566345 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.566363 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.566377 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.668535 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.668591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.668618 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.668636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.668648 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.771713 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.771809 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.771831 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.771899 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.771920 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.875144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.875185 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.875195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.875213 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.875226 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.891691 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.891722 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.891705 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:25 crc kubenswrapper[4787]: E0219 19:20:25.891829 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:25 crc kubenswrapper[4787]: E0219 19:20:25.891935 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:25 crc kubenswrapper[4787]: E0219 19:20:25.892419 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.892992 4787 scope.go:117] "RemoveContainer" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" Feb 19 19:20:25 crc kubenswrapper[4787]: E0219 19:20:25.893241 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.902093 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:17:30.634506341 +0000 UTC Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.978647 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.978689 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.978699 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.978712 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:25 crc kubenswrapper[4787]: I0219 19:20:25.978724 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:25Z","lastTransitionTime":"2026-02-19T19:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.081732 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.081790 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.081803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.081821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.081833 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.184815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.184856 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.184877 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.184893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.184902 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.287717 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.287757 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.287768 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.287782 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.287790 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.389755 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.389804 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.389815 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.389833 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.389845 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.492484 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.492533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.492545 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.492578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.492591 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.595601 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.595668 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.595677 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.595693 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.595704 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.699162 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.699201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.699209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.699224 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.699233 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.802063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.802117 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.802128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.802144 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.802154 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.891178 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:26 crc kubenswrapper[4787]: E0219 19:20:26.891385 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.902738 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:47:56.037921992 +0000 UTC Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.904929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.904969 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.904982 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.905002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:26 crc kubenswrapper[4787]: I0219 19:20:26.905014 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:26Z","lastTransitionTime":"2026-02-19T19:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.008219 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.008254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.008263 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.008298 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.008312 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.110769 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.110807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.110814 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.110828 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.110838 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.213188 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.213242 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.213254 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.213271 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.213284 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.322039 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.322091 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.322103 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.322125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.322139 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.425238 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.425272 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.425281 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.425346 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.425366 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.529133 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.529195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.529212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.529241 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.529259 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.631442 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.631485 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.631497 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.631515 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.631525 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.734011 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.734323 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.734399 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.734487 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.734574 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.836698 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.836735 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.836745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.836760 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.836771 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.891396 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.891515 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.891400 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:27 crc kubenswrapper[4787]: E0219 19:20:27.891778 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:27 crc kubenswrapper[4787]: E0219 19:20:27.891543 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:27 crc kubenswrapper[4787]: E0219 19:20:27.891701 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.903653 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:41:49.774128556 +0000 UTC Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.938746 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.938785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.938793 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.938807 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:27 crc kubenswrapper[4787]: I0219 19:20:27.938818 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:27Z","lastTransitionTime":"2026-02-19T19:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.041541 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.041869 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.041938 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.042000 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.042067 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.144409 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.144771 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.144842 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.144933 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.144997 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.247544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.247591 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.247622 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.247643 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.247654 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.350131 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.350172 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.350182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.350201 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.350215 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.452444 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.452483 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.452491 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.452505 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.452516 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.555041 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.555407 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.555520 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.555673 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.555782 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.658631 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.658691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.658701 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.658715 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.658725 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.760850 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.760893 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.760905 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.760921 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.760933 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.862773 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.862831 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.862844 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.862865 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.862879 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.891512 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:28 crc kubenswrapper[4787]: E0219 19:20:28.891993 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.904459 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:27:13.590666796 +0000 UTC Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.965737 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.965774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.965785 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.965801 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:28 crc kubenswrapper[4787]: I0219 19:20:28.965814 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:28Z","lastTransitionTime":"2026-02-19T19:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.069518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.069929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.070047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.070160 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.070231 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.172498 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.172555 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.172577 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.172599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.172636 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.276169 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.276803 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.276832 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.276853 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.276867 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.380132 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.380186 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.380195 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.380212 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.380222 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.483536 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.483636 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.483651 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.483671 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.483684 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.586774 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.586851 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.586863 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.586892 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.586907 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.689266 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.689309 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.689320 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.689335 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.689346 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.792457 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.792506 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.792518 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.792535 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.792545 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.891668 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.891775 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:29 crc kubenswrapper[4787]: E0219 19:20:29.891829 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.891668 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:29 crc kubenswrapper[4787]: E0219 19:20:29.892063 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:29 crc kubenswrapper[4787]: E0219 19:20:29.892241 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.894654 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.894691 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.894704 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.894724 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.894739 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.904939 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:21:42.782606598 +0000 UTC Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.997821 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.997914 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.997932 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.997954 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:29 crc kubenswrapper[4787]: I0219 19:20:29.997967 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:29Z","lastTransitionTime":"2026-02-19T19:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.100676 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.100734 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.100745 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.100765 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.100779 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.203597 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.203692 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.203710 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.203731 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.203744 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.306830 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.306875 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.306885 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.306902 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.306912 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.409950 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.409992 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.410002 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.410016 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.410027 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.513051 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.513083 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.513092 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.513107 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.513117 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.615927 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.616029 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.616054 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.616084 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.616108 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.719125 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.719182 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.719192 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.719209 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.719221 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.821895 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.821943 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.821956 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.821978 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.821993 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.891487 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:30 crc kubenswrapper[4787]: E0219 19:20:30.891676 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.905733 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:52:56.705597774 +0000 UTC Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.925100 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.925151 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.925163 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.925184 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:30 crc kubenswrapper[4787]: I0219 19:20:30.925198 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:30Z","lastTransitionTime":"2026-02-19T19:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.028063 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.028104 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.028113 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.028128 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.028138 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:31Z","lastTransitionTime":"2026-02-19T19:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.131492 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.131544 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.131557 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.131578 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.131594 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:31Z","lastTransitionTime":"2026-02-19T19:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.234929 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.235005 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.235047 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.235066 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.235078 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:31Z","lastTransitionTime":"2026-02-19T19:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.337449 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.337501 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.337514 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.337530 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.337542 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:31Z","lastTransitionTime":"2026-02-19T19:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.440896 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.440935 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.440944 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.440958 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.440969 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:31Z","lastTransitionTime":"2026-02-19T19:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.446533 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.446576 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.446585 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.446599 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.446637 4787 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T19:20:31Z","lastTransitionTime":"2026-02-19T19:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.495181 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp"] Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.495729 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.500263 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.500592 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.500773 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.500929 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.611874 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bc6502c-8055-430e-9e2a-6de38a52335c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.611919 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc6502c-8055-430e-9e2a-6de38a52335c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.611961 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3bc6502c-8055-430e-9e2a-6de38a52335c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.611976 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc6502c-8055-430e-9e2a-6de38a52335c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.612000 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bc6502c-8055-430e-9e2a-6de38a52335c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713473 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bc6502c-8055-430e-9e2a-6de38a52335c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713526 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc6502c-8055-430e-9e2a-6de38a52335c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713559 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bc6502c-8055-430e-9e2a-6de38a52335c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc6502c-8055-430e-9e2a-6de38a52335c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713641 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bc6502c-8055-430e-9e2a-6de38a52335c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713707 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bc6502c-8055-430e-9e2a-6de38a52335c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.713706 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3bc6502c-8055-430e-9e2a-6de38a52335c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.714718 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bc6502c-8055-430e-9e2a-6de38a52335c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.724030 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bc6502c-8055-430e-9e2a-6de38a52335c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.732677 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bc6502c-8055-430e-9e2a-6de38a52335c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dxgcp\" (UID: \"3bc6502c-8055-430e-9e2a-6de38a52335c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.814619 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.869445 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" event={"ID":"3bc6502c-8055-430e-9e2a-6de38a52335c","Type":"ContainerStarted","Data":"a547d59d5e6c4d0e557b6e6bdc8ff40794f91a49ea49d96db83603730790b427"} Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.891211 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.891211 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.891321 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:31 crc kubenswrapper[4787]: E0219 19:20:31.891488 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:31 crc kubenswrapper[4787]: E0219 19:20:31.891571 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:31 crc kubenswrapper[4787]: E0219 19:20:31.891745 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.906460 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:17:52.440446603 +0000 UTC Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.906491 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 19:20:31 crc kubenswrapper[4787]: I0219 19:20:31.915186 4787 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 19:20:32 crc kubenswrapper[4787]: I0219 19:20:32.873877 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" event={"ID":"3bc6502c-8055-430e-9e2a-6de38a52335c","Type":"ContainerStarted","Data":"39e94070f7b69cf0126b3f9305afa8f87bf72a22792afdbfc48eb1802d64d456"} Feb 19 19:20:32 crc kubenswrapper[4787]: I0219 19:20:32.891042 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dxgcp" podStartSLOduration=79.891022951 podStartE2EDuration="1m19.891022951s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
19:20:32.890031593 +0000 UTC m=+100.680697545" watchObservedRunningTime="2026-02-19 19:20:32.891022951 +0000 UTC m=+100.681688883" Feb 19 19:20:32 crc kubenswrapper[4787]: I0219 19:20:32.891500 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:32 crc kubenswrapper[4787]: E0219 19:20:32.893855 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:33 crc kubenswrapper[4787]: I0219 19:20:33.734306 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:33 crc kubenswrapper[4787]: E0219 19:20:33.734466 4787 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:20:33 crc kubenswrapper[4787]: E0219 19:20:33.734540 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs podName:56f25fce-8c35-4786-94f3-93854459f32a nodeName:}" failed. No retries permitted until 2026-02-19 19:21:37.734524918 +0000 UTC m=+165.525190860 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs") pod "network-metrics-daemon-cv5f6" (UID: "56f25fce-8c35-4786-94f3-93854459f32a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 19:20:33 crc kubenswrapper[4787]: I0219 19:20:33.891880 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:33 crc kubenswrapper[4787]: I0219 19:20:33.891995 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:33 crc kubenswrapper[4787]: I0219 19:20:33.892061 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:33 crc kubenswrapper[4787]: E0219 19:20:33.892205 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:33 crc kubenswrapper[4787]: E0219 19:20:33.892344 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:33 crc kubenswrapper[4787]: E0219 19:20:33.892549 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:34 crc kubenswrapper[4787]: I0219 19:20:34.891357 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:34 crc kubenswrapper[4787]: E0219 19:20:34.891556 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:35 crc kubenswrapper[4787]: I0219 19:20:35.891725 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:35 crc kubenswrapper[4787]: I0219 19:20:35.891725 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:35 crc kubenswrapper[4787]: I0219 19:20:35.891770 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:35 crc kubenswrapper[4787]: E0219 19:20:35.892291 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:35 crc kubenswrapper[4787]: E0219 19:20:35.892429 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:35 crc kubenswrapper[4787]: E0219 19:20:35.892585 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:36 crc kubenswrapper[4787]: I0219 19:20:36.891310 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:36 crc kubenswrapper[4787]: E0219 19:20:36.891521 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:37 crc kubenswrapper[4787]: I0219 19:20:37.890940 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:37 crc kubenswrapper[4787]: E0219 19:20:37.891128 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:37 crc kubenswrapper[4787]: I0219 19:20:37.891241 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:37 crc kubenswrapper[4787]: I0219 19:20:37.891347 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:37 crc kubenswrapper[4787]: E0219 19:20:37.891725 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:37 crc kubenswrapper[4787]: E0219 19:20:37.891894 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:37 crc kubenswrapper[4787]: I0219 19:20:37.892228 4787 scope.go:117] "RemoveContainer" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" Feb 19 19:20:37 crc kubenswrapper[4787]: E0219 19:20:37.892437 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5xjgd_openshift-ovn-kubernetes(4989ff60-0c48-4f78-bcf6-2d394ee929fd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" Feb 19 19:20:38 crc kubenswrapper[4787]: I0219 19:20:38.891127 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:38 crc kubenswrapper[4787]: E0219 19:20:38.891380 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:39 crc kubenswrapper[4787]: I0219 19:20:39.891304 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:39 crc kubenswrapper[4787]: I0219 19:20:39.891379 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:39 crc kubenswrapper[4787]: I0219 19:20:39.891413 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:39 crc kubenswrapper[4787]: E0219 19:20:39.891469 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:39 crc kubenswrapper[4787]: E0219 19:20:39.891673 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:39 crc kubenswrapper[4787]: E0219 19:20:39.891813 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:40 crc kubenswrapper[4787]: I0219 19:20:40.891156 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:40 crc kubenswrapper[4787]: E0219 19:20:40.891325 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:41 crc kubenswrapper[4787]: I0219 19:20:41.891570 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:41 crc kubenswrapper[4787]: I0219 19:20:41.891724 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:41 crc kubenswrapper[4787]: I0219 19:20:41.891768 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:41 crc kubenswrapper[4787]: E0219 19:20:41.891882 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:41 crc kubenswrapper[4787]: E0219 19:20:41.892041 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:41 crc kubenswrapper[4787]: E0219 19:20:41.892211 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:42 crc kubenswrapper[4787]: I0219 19:20:42.892770 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:42 crc kubenswrapper[4787]: E0219 19:20:42.893084 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:43 crc kubenswrapper[4787]: I0219 19:20:43.897051 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:43 crc kubenswrapper[4787]: I0219 19:20:43.897115 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:43 crc kubenswrapper[4787]: E0219 19:20:43.897279 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:43 crc kubenswrapper[4787]: E0219 19:20:43.897705 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:43 crc kubenswrapper[4787]: I0219 19:20:43.897999 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:43 crc kubenswrapper[4787]: E0219 19:20:43.898192 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:44 crc kubenswrapper[4787]: I0219 19:20:44.891561 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:44 crc kubenswrapper[4787]: E0219 19:20:44.892115 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:45 crc kubenswrapper[4787]: I0219 19:20:45.891598 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:45 crc kubenswrapper[4787]: I0219 19:20:45.891689 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:45 crc kubenswrapper[4787]: I0219 19:20:45.891689 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:45 crc kubenswrapper[4787]: E0219 19:20:45.891955 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:45 crc kubenswrapper[4787]: E0219 19:20:45.891993 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:45 crc kubenswrapper[4787]: E0219 19:20:45.892132 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:46 crc kubenswrapper[4787]: I0219 19:20:46.891993 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:46 crc kubenswrapper[4787]: E0219 19:20:46.892190 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:47 crc kubenswrapper[4787]: I0219 19:20:47.891795 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:47 crc kubenswrapper[4787]: I0219 19:20:47.891909 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:47 crc kubenswrapper[4787]: I0219 19:20:47.891799 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:47 crc kubenswrapper[4787]: E0219 19:20:47.892019 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:47 crc kubenswrapper[4787]: E0219 19:20:47.892132 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:47 crc kubenswrapper[4787]: E0219 19:20:47.892286 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:48 crc kubenswrapper[4787]: I0219 19:20:48.891940 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:48 crc kubenswrapper[4787]: E0219 19:20:48.892207 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.891137 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.891269 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.891137 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:49 crc kubenswrapper[4787]: E0219 19:20:49.891434 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:49 crc kubenswrapper[4787]: E0219 19:20:49.891676 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:49 crc kubenswrapper[4787]: E0219 19:20:49.891783 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.934680 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/1.log" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.935359 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/0.log" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.935441 4787 generic.go:334] "Generic (PLEG): container finished" podID="f0706129-aa73-40ed-899f-02882ed5a4cc" containerID="f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1" exitCode=1 Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.935503 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerDied","Data":"f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1"} Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.935574 4787 scope.go:117] "RemoveContainer" containerID="ec28cc3e6a7a4d383d8f388c86d2fae7dcce996f540adc989a6957840233b933" Feb 19 19:20:49 crc kubenswrapper[4787]: I0219 19:20:49.936460 4787 scope.go:117] "RemoveContainer" containerID="f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1" Feb 19 19:20:49 crc kubenswrapper[4787]: E0219 19:20:49.936840 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qxzkq_openshift-multus(f0706129-aa73-40ed-899f-02882ed5a4cc)\"" pod="openshift-multus/multus-qxzkq" podUID="f0706129-aa73-40ed-899f-02882ed5a4cc" Feb 19 19:20:50 crc kubenswrapper[4787]: I0219 19:20:50.891942 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:50 crc kubenswrapper[4787]: E0219 19:20:50.892201 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:50 crc kubenswrapper[4787]: I0219 19:20:50.943032 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/1.log" Feb 19 19:20:51 crc kubenswrapper[4787]: I0219 19:20:51.891631 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:51 crc kubenswrapper[4787]: I0219 19:20:51.891717 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:51 crc kubenswrapper[4787]: E0219 19:20:51.891857 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:51 crc kubenswrapper[4787]: I0219 19:20:51.891873 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:51 crc kubenswrapper[4787]: E0219 19:20:51.892409 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:51 crc kubenswrapper[4787]: E0219 19:20:51.892556 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:51 crc kubenswrapper[4787]: I0219 19:20:51.893014 4787 scope.go:117] "RemoveContainer" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" Feb 19 19:20:52 crc kubenswrapper[4787]: I0219 19:20:52.749337 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cv5f6"] Feb 19 19:20:52 crc kubenswrapper[4787]: I0219 19:20:52.750180 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:52 crc kubenswrapper[4787]: E0219 19:20:52.750382 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:52 crc kubenswrapper[4787]: E0219 19:20:52.852334 4787 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 19:20:52 crc kubenswrapper[4787]: I0219 19:20:52.951681 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/3.log" Feb 19 19:20:52 crc kubenswrapper[4787]: I0219 19:20:52.954603 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerStarted","Data":"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520"} Feb 19 19:20:52 crc kubenswrapper[4787]: I0219 19:20:52.956191 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:20:52 crc kubenswrapper[4787]: E0219 19:20:52.981494 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:20:53 crc kubenswrapper[4787]: I0219 19:20:53.891334 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:53 crc kubenswrapper[4787]: I0219 19:20:53.891454 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:53 crc kubenswrapper[4787]: I0219 19:20:53.891454 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:53 crc kubenswrapper[4787]: E0219 19:20:53.891675 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:53 crc kubenswrapper[4787]: I0219 19:20:53.891753 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:53 crc kubenswrapper[4787]: E0219 19:20:53.891939 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:53 crc kubenswrapper[4787]: E0219 19:20:53.891911 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:53 crc kubenswrapper[4787]: E0219 19:20:53.892061 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:55 crc kubenswrapper[4787]: I0219 19:20:55.891376 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:55 crc kubenswrapper[4787]: I0219 19:20:55.891432 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:55 crc kubenswrapper[4787]: I0219 19:20:55.891446 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:55 crc kubenswrapper[4787]: I0219 19:20:55.891446 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:55 crc kubenswrapper[4787]: E0219 19:20:55.891583 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:55 crc kubenswrapper[4787]: E0219 19:20:55.891827 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:55 crc kubenswrapper[4787]: E0219 19:20:55.891935 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:55 crc kubenswrapper[4787]: E0219 19:20:55.892011 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:57 crc kubenswrapper[4787]: I0219 19:20:57.891582 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:57 crc kubenswrapper[4787]: I0219 19:20:57.891636 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:57 crc kubenswrapper[4787]: E0219 19:20:57.892128 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:57 crc kubenswrapper[4787]: I0219 19:20:57.891808 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:57 crc kubenswrapper[4787]: I0219 19:20:57.891722 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:57 crc kubenswrapper[4787]: E0219 19:20:57.892236 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:57 crc kubenswrapper[4787]: E0219 19:20:57.892348 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:57 crc kubenswrapper[4787]: E0219 19:20:57.892549 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:20:57 crc kubenswrapper[4787]: E0219 19:20:57.982956 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:20:59 crc kubenswrapper[4787]: I0219 19:20:59.891877 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:20:59 crc kubenswrapper[4787]: I0219 19:20:59.891993 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:20:59 crc kubenswrapper[4787]: E0219 19:20:59.892042 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:20:59 crc kubenswrapper[4787]: E0219 19:20:59.892196 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:20:59 crc kubenswrapper[4787]: I0219 19:20:59.891877 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:20:59 crc kubenswrapper[4787]: E0219 19:20:59.892338 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:20:59 crc kubenswrapper[4787]: I0219 19:20:59.892768 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:20:59 crc kubenswrapper[4787]: E0219 19:20:59.892870 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:21:01 crc kubenswrapper[4787]: I0219 19:21:01.891095 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:01 crc kubenswrapper[4787]: I0219 19:21:01.891207 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:01 crc kubenswrapper[4787]: I0219 19:21:01.891309 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:01 crc kubenswrapper[4787]: E0219 19:21:01.891312 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:21:01 crc kubenswrapper[4787]: I0219 19:21:01.891363 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:01 crc kubenswrapper[4787]: E0219 19:21:01.891456 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:21:01 crc kubenswrapper[4787]: E0219 19:21:01.891530 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:21:01 crc kubenswrapper[4787]: E0219 19:21:01.891660 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:21:02 crc kubenswrapper[4787]: E0219 19:21:02.984187 4787 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:21:03 crc kubenswrapper[4787]: I0219 19:21:03.891479 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:03 crc kubenswrapper[4787]: I0219 19:21:03.891668 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:03 crc kubenswrapper[4787]: E0219 19:21:03.891733 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:21:03 crc kubenswrapper[4787]: I0219 19:21:03.891787 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:03 crc kubenswrapper[4787]: I0219 19:21:03.891682 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:03 crc kubenswrapper[4787]: E0219 19:21:03.892040 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:21:03 crc kubenswrapper[4787]: E0219 19:21:03.892160 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:21:03 crc kubenswrapper[4787]: E0219 19:21:03.892325 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:21:04 crc kubenswrapper[4787]: I0219 19:21:04.892423 4787 scope.go:117] "RemoveContainer" containerID="f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1" Feb 19 19:21:04 crc kubenswrapper[4787]: I0219 19:21:04.932477 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podStartSLOduration=111.932430878 podStartE2EDuration="1m51.932430878s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:20:52.987379546 +0000 UTC m=+120.778045488" watchObservedRunningTime="2026-02-19 19:21:04.932430878 +0000 UTC m=+132.723096860" Feb 19 19:21:05 crc kubenswrapper[4787]: I0219 19:21:05.891761 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:05 crc kubenswrapper[4787]: I0219 19:21:05.891783 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:05 crc kubenswrapper[4787]: E0219 19:21:05.892390 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:21:05 crc kubenswrapper[4787]: I0219 19:21:05.891854 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:05 crc kubenswrapper[4787]: E0219 19:21:05.892514 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:21:05 crc kubenswrapper[4787]: I0219 19:21:05.891797 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:05 crc kubenswrapper[4787]: E0219 19:21:05.892258 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:21:05 crc kubenswrapper[4787]: E0219 19:21:05.892655 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:21:06 crc kubenswrapper[4787]: I0219 19:21:06.002051 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/1.log" Feb 19 19:21:06 crc kubenswrapper[4787]: I0219 19:21:06.002118 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerStarted","Data":"ab6f912b26d7da8c204f3006c121135c14a78395a3837de5a8c6b3cba6c43a85"} Feb 19 19:21:07 crc kubenswrapper[4787]: I0219 19:21:07.891306 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:07 crc kubenswrapper[4787]: I0219 19:21:07.891404 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:07 crc kubenswrapper[4787]: I0219 19:21:07.891344 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:07 crc kubenswrapper[4787]: I0219 19:21:07.891308 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:07 crc kubenswrapper[4787]: E0219 19:21:07.891516 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 19:21:07 crc kubenswrapper[4787]: E0219 19:21:07.891728 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 19:21:07 crc kubenswrapper[4787]: E0219 19:21:07.891997 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cv5f6" podUID="56f25fce-8c35-4786-94f3-93854459f32a" Feb 19 19:21:07 crc kubenswrapper[4787]: E0219 19:21:07.892106 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.891767 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.891897 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.891767 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.891800 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.894594 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.898122 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.898328 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.898415 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.898591 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 19:21:09 crc kubenswrapper[4787]: I0219 19:21:09.900780 4787 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.780694 4787 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.841767 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.842443 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.843191 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.844138 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.844280 4787 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.844356 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc 
kubenswrapper[4787]: I0219 19:21:11.845514 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8"] Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.846195 4787 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.846274 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.846197 4787 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.846353 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc 
kubenswrapper[4787]: I0219 19:21:11.846555 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.849796 4787 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.849863 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.849941 4787 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: secrets "openshift-apiserver-operator-dockercfg-xtcjv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.849962 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-dockercfg-xtcjv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 
'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.850023 4787 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.850040 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc kubenswrapper[4787]: W0219 19:21:11.850133 4787 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 19 19:21:11 crc kubenswrapper[4787]: E0219 19:21:11.850150 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.850184 
4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.850259 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.850435 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.850458 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.850704 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.850755 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.852526 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.854547 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.854877 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.855213 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.855413 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 19:21:11 crc 
kubenswrapper[4787]: I0219 19:21:11.855629 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.855854 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.866260 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hl7xp"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.866932 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.867560 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xplkr"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.868312 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.869750 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.870182 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.873574 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vkbfk"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.874842 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.878070 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.878300 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.880769 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.881675 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.881697 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6cngl"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.882388 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.884243 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.884521 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.885289 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.885896 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.886583 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4xqxs"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.887349 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.887860 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.911548 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.915334 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.917665 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2sgql"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931056 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931265 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931316 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931491 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931673 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931681 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931796 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.931820 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.932537 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.932840 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.933092 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.933437 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.933942 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934022 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934200 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934279 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934492 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934760 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934871 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.934694 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.935242 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.935327 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-h92w2"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.935766 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.937083 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.938066 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.938082 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939197 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939413 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939466 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939386 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939602 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.939679 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h6jhc"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.940502 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.940634 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.940758 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.940880 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.940929 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.941085 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.941322 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.941935 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.942032 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.942751 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-audit-dir\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944589 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-audit-policies\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944654 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944688 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjdj\" (UniqueName: \"kubernetes.io/projected/42b96086-3538-440d-a1f9-cd86de6191c7-kube-api-access-gqjdj\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944732 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-config\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944773 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxlqd\" (UniqueName: \"kubernetes.io/projected/e08e5866-afcd-4355-a978-894053fa1cb3-kube-api-access-kxlqd\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-client-ca\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944814 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk47r\" (UniqueName: \"kubernetes.io/projected/840e5c14-8d41-4276-b4d4-b4eb62898080-kube-api-access-sk47r\") pod 
\"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944838 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdk9\" (UniqueName: \"kubernetes.io/projected/eb7b606b-caed-4b2c-8db1-092c38d05ad0-kube-api-access-4rdk9\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944862 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ece0e8-135e-410d-b1ea-ca5ee78a3e3c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tmmbh\" (UID: \"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-serving-cert\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944921 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx57\" (UniqueName: \"kubernetes.io/projected/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-kube-api-access-czx57\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944945 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb7b606b-caed-4b2c-8db1-092c38d05ad0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944965 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49n7\" (UniqueName: \"kubernetes.io/projected/66d75551-4920-4009-9813-958055427d0e-kube-api-access-t49n7\") pod \"dns-operator-744455d44c-4xqxs\" (UID: \"66d75551-4920-4009-9813-958055427d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.944983 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945003 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-serving-cert\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945023 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b96086-3538-440d-a1f9-cd86de6191c7-trusted-ca\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840632f4-8486-42b2-9af9-22faec045a6f-config\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-encryption-config\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945080 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-images\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-config\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:11 crc kubenswrapper[4787]: 
I0219 19:21:11.945116 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-config\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945136 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5mb\" (UniqueName: \"kubernetes.io/projected/aa6ee378-233f-4cbf-b43c-9569c6a41643-kube-api-access-th5mb\") pod \"downloads-7954f5f757-6cngl\" (UID: \"aa6ee378-233f-4cbf-b43c-9569c6a41643\") " pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945156 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl8xx\" (UniqueName: \"kubernetes.io/projected/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-kube-api-access-xl8xx\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945175 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945196 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b606b-caed-4b2c-8db1-092c38d05ad0-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945216 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-client-ca\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945252 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08e5866-afcd-4355-a978-894053fa1cb3-serving-cert\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945271 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42b96086-3538-440d-a1f9-cd86de6191c7-config\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945291 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-client\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945314 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945337 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/840632f4-8486-42b2-9af9-22faec045a6f-machine-approver-tls\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945357 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66d75551-4920-4009-9813-958055427d0e-metrics-tls\") pod \"dns-operator-744455d44c-4xqxs\" (UID: \"66d75551-4920-4009-9813-958055427d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945377 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rstrl\" (UniqueName: \"kubernetes.io/projected/840632f4-8486-42b2-9af9-22faec045a6f-kube-api-access-rstrl\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqhg\" (UniqueName: \"kubernetes.io/projected/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-kube-api-access-txqhg\") pod 
\"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945422 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nsv\" (UniqueName: \"kubernetes.io/projected/80ece0e8-135e-410d-b1ea-ca5ee78a3e3c-kube-api-access-v5nsv\") pod \"cluster-samples-operator-665b6dd947-tmmbh\" (UID: \"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945443 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840e5c14-8d41-4276-b4d4-b4eb62898080-serving-cert\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945461 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42b96086-3538-440d-a1f9-cd86de6191c7-serving-cert\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.945489 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/840632f4-8486-42b2-9af9-22faec045a6f-auth-proxy-config\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 
19:21:11.945727 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.946173 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.946825 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.949497 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.951231 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.951459 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7bpzk"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.951727 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.951785 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.953141 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.954497 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-shj95"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.955965 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.956468 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.956669 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.958234 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.962026 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.962345 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.962443 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.963067 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.964157 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.964906 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965307 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965448 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965636 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965784 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965866 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965902 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965960 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.966059 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.966074 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.966117 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965868 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.966149 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965789 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965796 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.966217 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.965923 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.969335 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tm84p"] Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.977713 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978296 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978361 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978392 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978483 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978527 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978740 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978765 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978814 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978900 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978951 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.979080 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.979286 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.978296 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.979969 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.990423 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 19:21:11 crc kubenswrapper[4787]: I0219 19:21:11.995310 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.000624 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.029223 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-js449"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.029900 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.030055 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.030270 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.030784 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.031584 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.031827 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.031592 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.032457 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.032658 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.033220 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gcqj4"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.035017 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.037159 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.046128 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.046666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf6w6\" (UniqueName: \"kubernetes.io/projected/9ded0889-3bf6-4276-9af2-a7a81df383ea-kube-api-access-vf6w6\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.046713 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42b96086-3538-440d-a1f9-cd86de6191c7-config\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.046772 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-client\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.046861 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08e5866-afcd-4355-a978-894053fa1cb3-serving-cert\") pod 
\"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.046951 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047004 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047031 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/840632f4-8486-42b2-9af9-22faec045a6f-machine-approver-tls\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047092 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ct8m\" (UniqueName: \"kubernetes.io/projected/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-kube-api-access-9ct8m\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047118 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rstrl\" (UniqueName: \"kubernetes.io/projected/840632f4-8486-42b2-9af9-22faec045a6f-kube-api-access-rstrl\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047148 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqhg\" (UniqueName: \"kubernetes.io/projected/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-kube-api-access-txqhg\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047173 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/096ef5e0-07a5-4d8d-b217-c91a220b54b3-serving-cert\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047210 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/66d75551-4920-4009-9813-958055427d0e-metrics-tls\") pod \"dns-operator-744455d44c-4xqxs\" (UID: \"66d75551-4920-4009-9813-958055427d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nsv\" (UniqueName: \"kubernetes.io/projected/80ece0e8-135e-410d-b1ea-ca5ee78a3e3c-kube-api-access-v5nsv\") pod \"cluster-samples-operator-665b6dd947-tmmbh\" (UID: \"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047270 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840e5c14-8d41-4276-b4d4-b4eb62898080-serving-cert\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42b96086-3538-440d-a1f9-cd86de6191c7-serving-cert\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047354 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/840632f4-8486-42b2-9af9-22faec045a6f-auth-proxy-config\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 
19:21:12.047385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-service-ca\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047409 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2854140a-273c-41e8-9a78-611994b05d26-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7bpzk\" (UID: \"2854140a-273c-41e8-9a78-611994b05d26\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047436 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047467 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb42t\" (UniqueName: \"kubernetes.io/projected/5fbae455-2136-403f-bda5-236a4de586da-kube-api-access-sb42t\") pod \"control-plane-machine-set-operator-78cbb6b69f-htwzm\" (UID: \"5fbae455-2136-403f-bda5-236a4de586da\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047502 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-audit-dir\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047536 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-serving-cert\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047563 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-service-ca-bundle\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047593 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-audit-policies\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047642 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc 
kubenswrapper[4787]: I0219 19:21:12.047672 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fbae455-2136-403f-bda5-236a4de586da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-htwzm\" (UID: \"5fbae455-2136-403f-bda5-236a4de586da\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047696 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjdj\" (UniqueName: \"kubernetes.io/projected/42b96086-3538-440d-a1f9-cd86de6191c7-kube-api-access-gqjdj\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftr7f\" (UniqueName: \"kubernetes.io/projected/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-kube-api-access-ftr7f\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-config\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047813 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxlqd\" (UniqueName: \"kubernetes.io/projected/e08e5866-afcd-4355-a978-894053fa1cb3-kube-api-access-kxlqd\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047830 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42b96086-3538-440d-a1f9-cd86de6191c7-config\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-client\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-trusted-ca-bundle\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047961 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kkh\" (UniqueName: \"kubernetes.io/projected/70c356be-c7d4-479a-a357-4cfe97e5e9c9-kube-api-access-27kkh\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.047984 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbrt\" (UniqueName: \"kubernetes.io/projected/2854140a-273c-41e8-9a78-611994b05d26-kube-api-access-4fbrt\") pod \"multus-admission-controller-857f4d67dd-7bpzk\" (UID: \"2854140a-273c-41e8-9a78-611994b05d26\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048008 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048030 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-client-ca\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048053 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/92a6d23d-0714-44b6-9e17-b9d5d93824c1-signing-cabundle\") pod 
\"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a448d36-c533-49ee-8815-d8190936ac39-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048164 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk47r\" (UniqueName: \"kubernetes.io/projected/840e5c14-8d41-4276-b4d4-b4eb62898080-kube-api-access-sk47r\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048203 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rdk9\" (UniqueName: \"kubernetes.io/projected/eb7b606b-caed-4b2c-8db1-092c38d05ad0-kube-api-access-4rdk9\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-config\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048244 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-oauth-config\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048265 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ded0889-3bf6-4276-9af2-a7a81df383ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048285 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ece0e8-135e-410d-b1ea-ca5ee78a3e3c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tmmbh\" (UID: \"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048304 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-config\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048324 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-ca\") pod 
\"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048439 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-serving-cert\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048462 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/92a6d23d-0714-44b6-9e17-b9d5d93824c1-signing-key\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048480 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/607f701f-0640-4730-a751-da18d229b3f8-images\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048507 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czx57\" (UniqueName: \"kubernetes.io/projected/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-kube-api-access-czx57\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/607f701f-0640-4730-a751-da18d229b3f8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048547 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048566 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvhmr\" (UniqueName: \"kubernetes.io/projected/1f465edc-b94b-4a9d-9f9c-1540bb933c8d-kube-api-access-hvhmr\") pod \"package-server-manager-789f6589d5-fd6kc\" (UID: \"1f465edc-b94b-4a9d-9f9c-1540bb933c8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048622 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb7b606b-caed-4b2c-8db1-092c38d05ad0-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048648 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9tld\" (UniqueName: \"kubernetes.io/projected/607f701f-0640-4730-a751-da18d229b3f8-kube-api-access-r9tld\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048683 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-secret-volume\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-config\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: 
I0219 19:21:12.048721 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t49n7\" (UniqueName: \"kubernetes.io/projected/66d75551-4920-4009-9813-958055427d0e-kube-api-access-t49n7\") pod \"dns-operator-744455d44c-4xqxs\" (UID: \"66d75551-4920-4009-9813-958055427d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-serving-cert\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048759 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048796 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ded0889-3bf6-4276-9af2-a7a81df383ea-proxy-tls\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b96086-3538-440d-a1f9-cd86de6191c7-trusted-ca\") pod 
\"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048837 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2fhf\" (UniqueName: \"kubernetes.io/projected/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-kube-api-access-z2fhf\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048876 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-encryption-config\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048899 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-images\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7fh\" (UniqueName: \"kubernetes.io/projected/92a6d23d-0714-44b6-9e17-b9d5d93824c1-kube-api-access-qb7fh\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.048992 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-config\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049022 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840632f4-8486-42b2-9af9-22faec045a6f-config\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049046 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/607f701f-0640-4730-a751-da18d229b3f8-proxy-tls\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-config-volume\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049167 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a448d36-c533-49ee-8815-d8190936ac39-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-config\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-service-ca\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049482 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-config\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049510 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5mb\" (UniqueName: \"kubernetes.io/projected/aa6ee378-233f-4cbf-b43c-9569c6a41643-kube-api-access-th5mb\") pod \"downloads-7954f5f757-6cngl\" (UID: \"aa6ee378-233f-4cbf-b43c-9569c6a41643\") " pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049536 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl8xx\" (UniqueName: 
\"kubernetes.io/projected/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-kube-api-access-xl8xx\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049563 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbl5t\" (UniqueName: \"kubernetes.io/projected/096ef5e0-07a5-4d8d-b217-c91a220b54b3-kube-api-access-lbl5t\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049588 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgb4w\" (UniqueName: \"kubernetes.io/projected/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-kube-api-access-tgb4w\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049752 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f465edc-b94b-4a9d-9f9c-1540bb933c8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd6kc\" (UID: \"1f465edc-b94b-4a9d-9f9c-1540bb933c8d\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049785 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b606b-caed-4b2c-8db1-092c38d05ad0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049819 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-client-ca\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a448d36-c533-49ee-8815-d8190936ac39-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049870 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-serving-cert\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-oauth-serving-cert\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.049924 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.050853 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-client-ca\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.051735 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-audit-dir\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.052101 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qz8z6"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.052752 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.053022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-config\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.053142 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.053167 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-29dzb"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.053263 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.053644 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.054051 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.054660 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.055000 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-audit-policies\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.055784 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.055998 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/840632f4-8486-42b2-9af9-22faec045a6f-auth-proxy-config\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.056758 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840632f4-8486-42b2-9af9-22faec045a6f-config\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.057708 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-config\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.057816 4787 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.057760 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.058074 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.058094 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-config\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.058546 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.059251 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.059376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-client-ca\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.060431 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-encryption-config\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.060576 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.061089 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kpxw8"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.061435 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.061493 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.061641 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.061647 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.061965 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-images\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.062256 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.062474 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.062537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42b96086-3538-440d-a1f9-cd86de6191c7-serving-cert\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.063690 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-serving-cert\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.063978 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ece0e8-135e-410d-b1ea-ca5ee78a3e3c-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-tmmbh\" (UID: \"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.064145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-client\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.064477 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42b96086-3538-440d-a1f9-cd86de6191c7-trusted-ca\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.064965 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/840632f4-8486-42b2-9af9-22faec045a6f-machine-approver-tls\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.065100 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.066425 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08e5866-afcd-4355-a978-894053fa1cb3-serving-cert\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.069903 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.070076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66d75551-4920-4009-9813-958055427d0e-metrics-tls\") pod \"dns-operator-744455d44c-4xqxs\" (UID: \"66d75551-4920-4009-9813-958055427d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.070093 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840e5c14-8d41-4276-b4d4-b4eb62898080-serving-cert\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.071125 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xplkr"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.072155 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-serving-cert\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.072828 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.074748 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-4xqxs"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.078431 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6cngl"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.078948 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.080380 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h92w2"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.083220 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-shj95"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.083522 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.084920 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.085131 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.085847 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5c8rt"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.087771 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2sgql"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.087961 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.093178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.102207 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.104990 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.106631 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.112091 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.113203 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-js449"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.114264 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vkbfk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.115408 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hl7xp"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.116480 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.117573 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.118598 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tm84p"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.119786 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7bpzk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.120819 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h6jhc"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.122588 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6xptg"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.124020 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.125208 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dfvsh"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.125576 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.125696 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.126941 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.128398 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gcqj4"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.130784 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.132164 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.133193 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.134476 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.138338 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qz8z6"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.141726 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5c8rt"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.142185 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.143290 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6xptg"] Feb 19 19:21:12 
crc kubenswrapper[4787]: I0219 19:21:12.145271 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.145992 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.147449 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.149080 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150143 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kpxw8"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150762 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-serving-cert\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150796 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-service-ca-bundle\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150822 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb42t\" (UniqueName: \"kubernetes.io/projected/5fbae455-2136-403f-bda5-236a4de586da-kube-api-access-sb42t\") pod \"control-plane-machine-set-operator-78cbb6b69f-htwzm\" (UID: \"5fbae455-2136-403f-bda5-236a4de586da\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150878 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fbae455-2136-403f-bda5-236a4de586da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-htwzm\" (UID: \"5fbae455-2136-403f-bda5-236a4de586da\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150933 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftr7f\" (UniqueName: \"kubernetes.io/projected/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-kube-api-access-ftr7f\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150961 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-client\") pod 
\"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.150982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-trusted-ca-bundle\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151004 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27kkh\" (UniqueName: \"kubernetes.io/projected/70c356be-c7d4-479a-a357-4cfe97e5e9c9-kube-api-access-27kkh\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151025 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbrt\" (UniqueName: \"kubernetes.io/projected/2854140a-273c-41e8-9a78-611994b05d26-kube-api-access-4fbrt\") pod \"multus-admission-controller-857f4d67dd-7bpzk\" (UID: \"2854140a-273c-41e8-9a78-611994b05d26\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151044 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/92a6d23d-0714-44b6-9e17-b9d5d93824c1-signing-cabundle\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151094 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-config\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151113 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a448d36-c533-49ee-8815-d8190936ac39-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151131 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-oauth-config\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151153 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ded0889-3bf6-4276-9af2-a7a81df383ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151170 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-config\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151185 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-ca\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151247 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/92a6d23d-0714-44b6-9e17-b9d5d93824c1-signing-key\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151284 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/607f701f-0640-4730-a751-da18d229b3f8-images\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151341 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/607f701f-0640-4730-a751-da18d229b3f8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 
19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151362 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvhmr\" (UniqueName: \"kubernetes.io/projected/1f465edc-b94b-4a9d-9f9c-1540bb933c8d-kube-api-access-hvhmr\") pod \"package-server-manager-789f6589d5-fd6kc\" (UID: \"1f465edc-b94b-4a9d-9f9c-1540bb933c8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151428 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-secret-volume\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151484 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-config\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9tld\" (UniqueName: \"kubernetes.io/projected/607f701f-0640-4730-a751-da18d229b3f8-kube-api-access-r9tld\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ded0889-3bf6-4276-9af2-a7a81df383ea-proxy-tls\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151602 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-service-ca-bundle\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151683 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2fhf\" (UniqueName: \"kubernetes.io/projected/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-kube-api-access-z2fhf\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7fh\" (UniqueName: \"kubernetes.io/projected/92a6d23d-0714-44b6-9e17-b9d5d93824c1-kube-api-access-qb7fh\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151733 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-config\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151760 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/607f701f-0640-4730-a751-da18d229b3f8-proxy-tls\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-config-volume\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151804 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a448d36-c533-49ee-8815-d8190936ac39-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151834 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-service-ca\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151878 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbl5t\" (UniqueName: \"kubernetes.io/projected/096ef5e0-07a5-4d8d-b217-c91a220b54b3-kube-api-access-lbl5t\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgb4w\" (UniqueName: \"kubernetes.io/projected/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-kube-api-access-tgb4w\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 
19:21:12.151936 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f465edc-b94b-4a9d-9f9c-1540bb933c8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd6kc\" (UID: \"1f465edc-b94b-4a9d-9f9c-1540bb933c8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.151974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a448d36-c533-49ee-8815-d8190936ac39-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.152132 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.152195 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-serving-cert\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.153233 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-ca\") pod 
\"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.152361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-oauth-serving-cert\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.153516 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf6w6\" (UniqueName: \"kubernetes.io/projected/9ded0889-3bf6-4276-9af2-a7a81df383ea-kube-api-access-vf6w6\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.153823 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.153926 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/607f701f-0640-4730-a751-da18d229b3f8-images\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.154482 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9ct8m\" (UniqueName: \"kubernetes.io/projected/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-kube-api-access-9ct8m\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.154525 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.154553 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.154580 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/096ef5e0-07a5-4d8d-b217-c91a220b54b3-serving-cert\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.154694 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-service-ca\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " 
pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.154725 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2854140a-273c-41e8-9a78-611994b05d26-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7bpzk\" (UID: \"2854140a-273c-41e8-9a78-611994b05d26\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.155152 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9ded0889-3bf6-4276-9af2-a7a81df383ea-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.155156 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-config\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.155161 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/607f701f-0640-4730-a751-da18d229b3f8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157105 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-client\") pod 
\"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157113 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/096ef5e0-07a5-4d8d-b217-c91a220b54b3-etcd-service-ca\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157221 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-trusted-ca-bundle\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157334 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-oauth-serving-cert\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157586 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157682 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-service-ca\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.157777 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-config\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.158219 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-config\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.158234 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/607f701f-0640-4730-a751-da18d229b3f8-proxy-tls\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.158805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.158813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-config\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.158985 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-serving-cert\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.159185 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.159416 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9ded0889-3bf6-4276-9af2-a7a81df383ea-proxy-tls\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.159497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-serving-cert\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.159679 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2854140a-273c-41e8-9a78-611994b05d26-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-7bpzk\" (UID: \"2854140a-273c-41e8-9a78-611994b05d26\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.160678 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-oauth-config\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.160743 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5fbae455-2136-403f-bda5-236a4de586da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-htwzm\" (UID: \"5fbae455-2136-403f-bda5-236a4de586da\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.160897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.160980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/096ef5e0-07a5-4d8d-b217-c91a220b54b3-serving-cert\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.161231 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.161667 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q52vk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.162055 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f465edc-b94b-4a9d-9f9c-1540bb933c8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd6kc\" (UID: \"1f465edc-b94b-4a9d-9f9c-1540bb933c8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.162926 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.163165 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q52vk"] Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.164772 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.176048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/92a6d23d-0714-44b6-9e17-b9d5d93824c1-signing-key\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.185068 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.192756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/92a6d23d-0714-44b6-9e17-b9d5d93824c1-signing-cabundle\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.205140 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.225377 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.235452 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-config-volume\") pod 
\"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.245883 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.264991 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.276863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-secret-volume\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.305664 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.332261 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.344897 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.365373 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.386855 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 
19:21:12.395575 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a448d36-c533-49ee-8815-d8190936ac39-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.404555 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.406260 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a448d36-c533-49ee-8815-d8190936ac39-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.425747 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.445908 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.465810 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.485173 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.505107 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" 
Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.525150 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.565501 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.605312 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.605787 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.624799 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.646542 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.665739 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.685178 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.705514 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.725319 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.745497 4787 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.765239 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.790557 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.805660 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.825106 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.856996 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.898661 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxlqd\" (UniqueName: \"kubernetes.io/projected/e08e5866-afcd-4355-a978-894053fa1cb3-kube-api-access-kxlqd\") pod \"controller-manager-879f6c89f-hl7xp\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.903403 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjdj\" (UniqueName: \"kubernetes.io/projected/42b96086-3538-440d-a1f9-cd86de6191c7-kube-api-access-gqjdj\") pod \"console-operator-58897d9998-vkbfk\" (UID: \"42b96086-3538-440d-a1f9-cd86de6191c7\") " pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.920865 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-czx57\" (UniqueName: \"kubernetes.io/projected/32ca8e62-696d-4f05-9ba2-b8fbc20e407f-kube-api-access-czx57\") pod \"openshift-config-operator-7777fb866f-6rqjl\" (UID: \"32ca8e62-696d-4f05-9ba2-b8fbc20e407f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.925893 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.932944 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.950105 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.981920 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rstrl\" (UniqueName: \"kubernetes.io/projected/840632f4-8486-42b2-9af9-22faec045a6f-kube-api-access-rstrl\") pod \"machine-approver-56656f9798-tmcl8\" (UID: \"840632f4-8486-42b2-9af9-22faec045a6f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:12 crc kubenswrapper[4787]: I0219 19:21:12.997443 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqhg\" (UniqueName: \"kubernetes.io/projected/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-kube-api-access-txqhg\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.003961 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t49n7\" (UniqueName: 
\"kubernetes.io/projected/66d75551-4920-4009-9813-958055427d0e-kube-api-access-t49n7\") pod \"dns-operator-744455d44c-4xqxs\" (UID: \"66d75551-4920-4009-9813-958055427d0e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.005487 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.023625 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.025718 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.048597 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: E0219 19:21:13.050923 4787 secret.go:188] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 19:21:13 crc kubenswrapper[4787]: E0219 19:21:13.050989 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb7b606b-caed-4b2c-8db1-092c38d05ad0-serving-cert podName:eb7b606b-caed-4b2c-8db1-092c38d05ad0 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:13.550968414 +0000 UTC m=+141.341634356 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eb7b606b-caed-4b2c-8db1-092c38d05ad0-serving-cert") pod "openshift-apiserver-operator-796bbdcf4f-gzbjk" (UID: "eb7b606b-caed-4b2c-8db1-092c38d05ad0") : failed to sync secret cache: timed out waiting for the condition Feb 19 19:21:13 crc kubenswrapper[4787]: E0219 19:21:13.062036 4787 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 19 19:21:13 crc kubenswrapper[4787]: E0219 19:21:13.062102 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb7b606b-caed-4b2c-8db1-092c38d05ad0-config podName:eb7b606b-caed-4b2c-8db1-092c38d05ad0 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:13.562084394 +0000 UTC m=+141.352750336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eb7b606b-caed-4b2c-8db1-092c38d05ad0-config") pod "openshift-apiserver-operator-796bbdcf4f-gzbjk" (UID: "eb7b606b-caed-4b2c-8db1-092c38d05ad0") : failed to sync configmap cache: timed out waiting for the condition Feb 19 19:21:13 crc kubenswrapper[4787]: E0219 19:21:13.062112 4787 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 19:21:13 crc kubenswrapper[4787]: E0219 19:21:13.062274 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-serving-ca podName:b7424e23-a3c1-4e60-87c8-db2ad78ba2a9 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:13.562236078 +0000 UTC m=+141.352902210 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-serving-ca") pod "apiserver-7bbb656c7d-qzrkf" (UID: "b7424e23-a3c1-4e60-87c8-db2ad78ba2a9") : failed to sync configmap cache: timed out waiting for the condition Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.063498 4787 request.go:700] Waited for 1.008884936s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0 Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.071481 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.107193 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl8xx\" (UniqueName: \"kubernetes.io/projected/8ff3a7ef-0a27-40ac-8a37-186f0d4f0939-kube-api-access-xl8xx\") pod \"machine-api-operator-5694c8668f-xplkr\" (UID: \"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.126203 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.128914 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5mb\" (UniqueName: \"kubernetes.io/projected/aa6ee378-233f-4cbf-b43c-9569c6a41643-kube-api-access-th5mb\") pod \"downloads-7954f5f757-6cngl\" (UID: \"aa6ee378-233f-4cbf-b43c-9569c6a41643\") " pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.151768 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.172860 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.188387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.188663 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk47r\" (UniqueName: \"kubernetes.io/projected/840e5c14-8d41-4276-b4d4-b4eb62898080-kube-api-access-sk47r\") pod \"route-controller-manager-6576b87f9c-zz4h5\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.190745 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl"] Feb 19 19:21:13 crc kubenswrapper[4787]: W0219 19:21:13.191803 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840632f4_8486_42b2_9af9_22faec045a6f.slice/crio-237f70a5ba44437022cafa9217a19b93163e8a5388125026244b753dfe4b7ead WatchSource:0}: Error finding container 237f70a5ba44437022cafa9217a19b93163e8a5388125026244b753dfe4b7ead: Status 404 returned error can't find the container with id 237f70a5ba44437022cafa9217a19b93163e8a5388125026244b753dfe4b7ead Feb 19 19:21:13 crc kubenswrapper[4787]: W0219 19:21:13.201427 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ca8e62_696d_4f05_9ba2_b8fbc20e407f.slice/crio-3e03ebdf2173ea6c6bc7e1c04bb885848f0f59a044011c0b2acc6d3538c66451 WatchSource:0}: Error finding container 3e03ebdf2173ea6c6bc7e1c04bb885848f0f59a044011c0b2acc6d3538c66451: Status 404 returned error can't find the container with id 3e03ebdf2173ea6c6bc7e1c04bb885848f0f59a044011c0b2acc6d3538c66451 Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.221228 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nsv\" (UniqueName: \"kubernetes.io/projected/80ece0e8-135e-410d-b1ea-ca5ee78a3e3c-kube-api-access-v5nsv\") pod \"cluster-samples-operator-665b6dd947-tmmbh\" (UID: \"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.221704 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.226078 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.228505 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vkbfk"] Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.256460 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.265655 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.271831 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4xqxs"] Feb 19 
19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.285543 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.305326 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.306651 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 19:21:13 crc kubenswrapper[4787]: W0219 19:21:13.312302 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66d75551_4920_4009_9813_958055427d0e.slice/crio-650b0bbaf0c8a0e886d5860bf5acdea4d0ee8bb60f3af8685b11b03cb7b4b3ff WatchSource:0}: Error finding container 650b0bbaf0c8a0e886d5860bf5acdea4d0ee8bb60f3af8685b11b03cb7b4b3ff: Status 404 returned error can't find the container with id 650b0bbaf0c8a0e886d5860bf5acdea4d0ee8bb60f3af8685b11b03cb7b4b3ff Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.316343 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.325675 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.333749 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.345501 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.366906 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.386272 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hl7xp"] Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.387533 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.407293 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 19:21:13 crc kubenswrapper[4787]: W0219 19:21:13.408254 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08e5866_afcd_4355_a978_894053fa1cb3.slice/crio-afad83d09882d4a4ef91585be014d43673079c6baf6803943ba54905b57ca079 WatchSource:0}: Error finding container afad83d09882d4a4ef91585be014d43673079c6baf6803943ba54905b57ca079: Status 404 returned error can't find the container with id afad83d09882d4a4ef91585be014d43673079c6baf6803943ba54905b57ca079 Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.429772 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.448394 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.466394 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.473353 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xplkr"] Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.489757 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.515003 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.527981 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.549815 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.566160 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.584747 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.587988 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb7b606b-caed-4b2c-8db1-092c38d05ad0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.588119 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.588153 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b606b-caed-4b2c-8db1-092c38d05ad0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:13 crc kubenswrapper[4787]: W0219 19:21:13.592211 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ff3a7ef_0a27_40ac_8a37_186f0d4f0939.slice/crio-f9a202a4b5b5cf42559368d74baf4928f7ceae6ef74f484cc8cf194f39080148 WatchSource:0}: Error finding container f9a202a4b5b5cf42559368d74baf4928f7ceae6ef74f484cc8cf194f39080148: Status 404 returned error can't find the container with id f9a202a4b5b5cf42559368d74baf4928f7ceae6ef74f484cc8cf194f39080148 Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.606379 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.627058 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.631195 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5"] Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.649360 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 19:21:13 crc 
kubenswrapper[4787]: I0219 19:21:13.666647 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.686136 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.693398 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6cngl"] Feb 19 19:21:13 crc kubenswrapper[4787]: W0219 19:21:13.707434 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6ee378_233f_4cbf_b43c_9569c6a41643.slice/crio-394a1c89d35f1fedc1546f38d7cd8a211a6ab3bd2107d9c2f6ae26ede9324ef6 WatchSource:0}: Error finding container 394a1c89d35f1fedc1546f38d7cd8a211a6ab3bd2107d9c2f6ae26ede9324ef6: Status 404 returned error can't find the container with id 394a1c89d35f1fedc1546f38d7cd8a211a6ab3bd2107d9c2f6ae26ede9324ef6 Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.714561 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.725187 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.744962 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.751517 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh"] Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.765777 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.785086 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.805764 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.826395 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.845243 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.865394 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.884646 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.906226 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.925918 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.946520 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.965061 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 19:21:13 crc kubenswrapper[4787]: I0219 19:21:13.986040 4787 reflector.go:368] Caches 
populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.006413 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.029618 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.050315 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.065733 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.067334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" event={"ID":"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c","Type":"ContainerStarted","Data":"fc99a5fd17e92c51c5b6478332091ff678ce31196632c9abcca105b47fa4ccfe"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.067379 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" event={"ID":"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c","Type":"ContainerStarted","Data":"eab52dad0563169118e93aabccf78f8c1409434bc6fc71feab8ac8d90347d17d"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.070401 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" event={"ID":"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939","Type":"ContainerStarted","Data":"bf459cdf6962ced663bcd927fe452526705c8d33fcbcdb3bb34f0048e500d25f"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.070435 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" event={"ID":"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939","Type":"ContainerStarted","Data":"f411fd8b96ffcb75dcdb0275870f22521867e41d973d5e2919f0467f6441a16a"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.070457 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" event={"ID":"8ff3a7ef-0a27-40ac-8a37-186f0d4f0939","Type":"ContainerStarted","Data":"f9a202a4b5b5cf42559368d74baf4928f7ceae6ef74f484cc8cf194f39080148"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.072689 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" event={"ID":"42b96086-3538-440d-a1f9-cd86de6191c7","Type":"ContainerStarted","Data":"f68a0763fbd50bbb371afa97cfce2cd209396da986095a15dc90fecd445432bd"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.072717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" event={"ID":"42b96086-3538-440d-a1f9-cd86de6191c7","Type":"ContainerStarted","Data":"798d61360183945a7a975ca01de4162583c7fc536b4ceececbda49ab92da3856"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.073620 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.075547 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.075585 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.075925 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6cngl" event={"ID":"aa6ee378-233f-4cbf-b43c-9569c6a41643","Type":"ContainerStarted","Data":"b95244767d99ad1377e70ab8235113b6e8bda86970690b104747c17ac00d13ad"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.075950 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6cngl" event={"ID":"aa6ee378-233f-4cbf-b43c-9569c6a41643","Type":"ContainerStarted","Data":"394a1c89d35f1fedc1546f38d7cd8a211a6ab3bd2107d9c2f6ae26ede9324ef6"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.077239 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.079054 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.079109 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.079653 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" 
event={"ID":"840e5c14-8d41-4276-b4d4-b4eb62898080","Type":"ContainerStarted","Data":"f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.079804 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" event={"ID":"840e5c14-8d41-4276-b4d4-b4eb62898080","Type":"ContainerStarted","Data":"bcbd96ea1961434d299d5cf05aa67620cdda5c9f56b5709d1f75071e282892fb"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.080235 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.082562 4787 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zz4h5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.082594 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" podUID="840e5c14-8d41-4276-b4d4-b4eb62898080" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.082820 4787 request.go:700] Waited for 1.931457327s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.084980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" event={"ID":"840632f4-8486-42b2-9af9-22faec045a6f","Type":"ContainerStarted","Data":"889aad1efa68fbda3ff39b830d6d13d9a247a96aed3f77e7e809b456680444a5"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.085031 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" event={"ID":"840632f4-8486-42b2-9af9-22faec045a6f","Type":"ContainerStarted","Data":"1f138e432ae84c2f8911516a17dd59c778489ef6b0a5e611c2c70795f54e6ca1"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.085042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" event={"ID":"840632f4-8486-42b2-9af9-22faec045a6f","Type":"ContainerStarted","Data":"237f70a5ba44437022cafa9217a19b93163e8a5388125026244b753dfe4b7ead"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.087936 4787 generic.go:334] "Generic (PLEG): container finished" podID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerID="852fd409001418233f33afb82290842dfa23e68eb40a46d45f4b61d689138257" exitCode=0 Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.088014 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" event={"ID":"32ca8e62-696d-4f05-9ba2-b8fbc20e407f","Type":"ContainerDied","Data":"852fd409001418233f33afb82290842dfa23e68eb40a46d45f4b61d689138257"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.088038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" event={"ID":"32ca8e62-696d-4f05-9ba2-b8fbc20e407f","Type":"ContainerStarted","Data":"3e03ebdf2173ea6c6bc7e1c04bb885848f0f59a044011c0b2acc6d3538c66451"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.091931 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" event={"ID":"66d75551-4920-4009-9813-958055427d0e","Type":"ContainerStarted","Data":"eeb0ffe463a3e9f9b0bce7d072b6dff53c5607ddfd54858bf38bce05938e2edc"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.092524 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" event={"ID":"66d75551-4920-4009-9813-958055427d0e","Type":"ContainerStarted","Data":"97ed94b52f3923270afa025d11824d296b50579133b212bff4b4846ecb0a39df"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.092635 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" event={"ID":"66d75551-4920-4009-9813-958055427d0e","Type":"ContainerStarted","Data":"650b0bbaf0c8a0e886d5860bf5acdea4d0ee8bb60f3af8685b11b03cb7b4b3ff"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.094761 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" event={"ID":"e08e5866-afcd-4355-a978-894053fa1cb3","Type":"ContainerStarted","Data":"388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.094851 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" event={"ID":"e08e5866-afcd-4355-a978-894053fa1cb3","Type":"ContainerStarted","Data":"afad83d09882d4a4ef91585be014d43673079c6baf6803943ba54905b57ca079"} Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.095631 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.098481 4787 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hl7xp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.098679 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" podUID="e08e5866-afcd-4355-a978-894053fa1cb3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.105058 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb42t\" (UniqueName: \"kubernetes.io/projected/5fbae455-2136-403f-bda5-236a4de586da-kube-api-access-sb42t\") pod \"control-plane-machine-set-operator-78cbb6b69f-htwzm\" (UID: \"5fbae455-2136-403f-bda5-236a4de586da\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.123036 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27kkh\" (UniqueName: \"kubernetes.io/projected/70c356be-c7d4-479a-a357-4cfe97e5e9c9-kube-api-access-27kkh\") pod \"console-f9d7485db-h92w2\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.144236 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbrt\" (UniqueName: \"kubernetes.io/projected/2854140a-273c-41e8-9a78-611994b05d26-kube-api-access-4fbrt\") pod \"multus-admission-controller-857f4d67dd-7bpzk\" (UID: \"2854140a-273c-41e8-9a78-611994b05d26\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.158688 4787 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out 
waiting for the condition Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.162411 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvhmr\" (UniqueName: \"kubernetes.io/projected/1f465edc-b94b-4a9d-9f9c-1540bb933c8d-kube-api-access-hvhmr\") pod \"package-server-manager-789f6589d5-fd6kc\" (UID: \"1f465edc-b94b-4a9d-9f9c-1540bb933c8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.182284 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9tld\" (UniqueName: \"kubernetes.io/projected/607f701f-0640-4730-a751-da18d229b3f8-kube-api-access-r9tld\") pod \"machine-config-operator-74547568cd-z7nbp\" (UID: \"607f701f-0640-4730-a751-da18d229b3f8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.203704 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftr7f\" (UniqueName: \"kubernetes.io/projected/69c5b2ea-5cb5-40ca-ba80-0e7266b80143-kube-api-access-ftr7f\") pod \"kube-storage-version-migrator-operator-b67b599dd-qf2tm\" (UID: \"69c5b2ea-5cb5-40ca-ba80-0e7266b80143\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.224214 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2fhf\" (UniqueName: \"kubernetes.io/projected/76d9faee-2bf0-4d62-89bc-ee58fa26a36f-kube-api-access-z2fhf\") pod \"openshift-controller-manager-operator-756b6f6bc6-wl9fg\" (UID: \"76d9faee-2bf0-4d62-89bc-ee58fa26a36f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.242480 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-qb7fh\" (UniqueName: \"kubernetes.io/projected/92a6d23d-0714-44b6-9e17-b9d5d93824c1-kube-api-access-qb7fh\") pod \"service-ca-9c57cc56f-tm84p\" (UID: \"92a6d23d-0714-44b6-9e17-b9d5d93824c1\") " pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.248632 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.256871 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.264165 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bxj92\" (UID: \"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.282886 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ct8m\" (UniqueName: \"kubernetes.io/projected/fe0d7abc-7a42-4ba4-8403-c6b9dd202217-kube-api-access-9ct8m\") pod \"authentication-operator-69f744f599-2sgql\" (UID: \"fe0d7abc-7a42-4ba4-8403-c6b9dd202217\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.296327 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.304374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf6w6\" (UniqueName: \"kubernetes.io/projected/9ded0889-3bf6-4276-9af2-a7a81df383ea-kube-api-access-vf6w6\") pod \"machine-config-controller-84d6567774-shj95\" (UID: \"9ded0889-3bf6-4276-9af2-a7a81df383ea\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.321300 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbl5t\" (UniqueName: \"kubernetes.io/projected/096ef5e0-07a5-4d8d-b217-c91a220b54b3-kube-api-access-lbl5t\") pod \"etcd-operator-b45778765-h6jhc\" (UID: \"096ef5e0-07a5-4d8d-b217-c91a220b54b3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.333848 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.334135 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.339347 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.346944 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.353916 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.361759 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a448d36-c533-49ee-8815-d8190936ac39-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mc57j\" (UID: \"5a448d36-c533-49ee-8815-d8190936ac39\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.361971 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.365507 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.366849 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgb4w\" (UniqueName: \"kubernetes.io/projected/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-kube-api-access-tgb4w\") pod \"collect-profiles-29525475-bdlsx\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.367547 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.376822 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.385071 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.396889 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.407417 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.424964 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.467725 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc"] Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.487881 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.489572 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7424e23-a3c1-4e60-87c8-db2ad78ba2a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qzrkf\" (UID: \"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.500413 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-tls\") pod \"image-registry-697d97f7c8-js449\" (UID: 
\"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.500488 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae563ac8-197b-4860-a115-175a12fa1690-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.500513 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae563ac8-197b-4860-a115-175a12fa1690-config\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.500531 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae563ac8-197b-4860-a115-175a12fa1690-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.500575 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-certificates\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.500597 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48098b79-2446-4f86-a42a-e6f12ab783d5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.501067 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-trusted-ca\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.501216 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.501272 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48098b79-2446-4f86-a42a-e6f12ab783d5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.501353 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xzh\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-kube-api-access-j6xzh\") pod \"image-registry-697d97f7c8-js449\" (UID: 
\"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.501732 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-bound-sa-token\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.505042 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.005024385 +0000 UTC m=+142.795690317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.505654 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.508980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7b606b-caed-4b2c-8db1-092c38d05ad0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:14 crc 
kubenswrapper[4787]: I0219 19:21:14.521217 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h92w2"] Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.530110 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.544210 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.546064 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.568822 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg"] Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.570929 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 19:21:14 crc kubenswrapper[4787]: W0219 19:21:14.584250 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d9faee_2bf0_4d62_89bc_ee58fa26a36f.slice/crio-8e3d256b0a26511940474fda8cb5f17122e54e7d8e5d5722f5117ca42dcc5289 WatchSource:0}: Error finding container 8e3d256b0a26511940474fda8cb5f17122e54e7d8e5d5722f5117ca42dcc5289: Status 404 returned error can't find the container with id 8e3d256b0a26511940474fda8cb5f17122e54e7d8e5d5722f5117ca42dcc5289 Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.584931 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.585707 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb7b606b-caed-4b2c-8db1-092c38d05ad0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.587440 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.590507 4787 projected.go:194] Error preparing data for projected volume kube-api-access-4rdk9 for pod openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk: failed to sync configmap cache: timed out waiting for the condition Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.590676 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb7b606b-caed-4b2c-8db1-092c38d05ad0-kube-api-access-4rdk9 podName:eb7b606b-caed-4b2c-8db1-092c38d05ad0 nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.09059018 +0000 UTC m=+142.881256122 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4rdk9" (UniqueName: "kubernetes.io/projected/eb7b606b-caed-4b2c-8db1-092c38d05ad0-kube-api-access-4rdk9") pod "openshift-apiserver-operator-796bbdcf4f-gzbjk" (UID: "eb7b606b-caed-4b2c-8db1-092c38d05ad0") : failed to sync configmap cache: timed out waiting for the condition Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.610302 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.610327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.610794 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.110750762 +0000 UTC m=+142.901416714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.610914 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae563ac8-197b-4860-a115-175a12fa1690-config\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.610960 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae563ac8-197b-4860-a115-175a12fa1690-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611006 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wr6q\" (UniqueName: \"kubernetes.io/projected/7dc20e80-624f-45be-90ae-f611007a3ffb-kube-api-access-4wr6q\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d87b05a-6cc4-4313-8993-ad5378bdc68f-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae563ac8-197b-4860-a115-175a12fa1690-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611144 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-certificates\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f9f0394-f71b-48e3-a338-e824cdbb8c69-apiservice-cert\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611232 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-image-import-ca\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611260 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f9f0394-f71b-48e3-a338-e824cdbb8c69-tmpfs\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611313 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-trusted-ca\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611355 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-audit\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611387 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611419 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0538112-d98b-49ff-9618-654279d0ef7f-audit-dir\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc 
kubenswrapper[4787]: I0219 19:21:14.611512 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611543 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dc20e80-624f-45be-90ae-f611007a3ffb-config-volume\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7dc20e80-624f-45be-90ae-f611007a3ffb-metrics-tls\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611584 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d87b05a-6cc4-4313-8993-ad5378bdc68f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611702 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9np\" (UniqueName: \"kubernetes.io/projected/8dc93068-54c2-4489-89e8-d4deaab95161-kube-api-access-vn9np\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-etcd-client\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrppn\" (UniqueName: \"kubernetes.io/projected/75fca331-369d-4e0d-92f1-848f6e0778e2-kube-api-access-zrppn\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.611875 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc93068-54c2-4489-89e8-d4deaab95161-serving-cert\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.612174 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48098b79-2446-4f86-a42a-e6f12ab783d5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.612248 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0538112-d98b-49ff-9618-654279d0ef7f-node-pullsecrets\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.612281 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-csi-data-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.612344 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-dir\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkztt\" (UniqueName: \"kubernetes.io/projected/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-kube-api-access-vkztt\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613047 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825f12a8-ed8f-4a13-910c-53801339ec23-service-ca-bundle\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613141 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613336 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-default-certificate\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613357 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-plugins-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613687 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613958 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-socket-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.613979 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/221a034a-d231-46b6-b0ea-624788b21fea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.614398 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44ss\" (UniqueName: \"kubernetes.io/projected/b0d95611-e321-46d4-ba78-b847021133c9-kube-api-access-x44ss\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.614448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gr8d\" (UniqueName: \"kubernetes.io/projected/6d87b05a-6cc4-4313-8993-ad5378bdc68f-kube-api-access-8gr8d\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.614793 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-tls\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.614850 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdfg\" (UniqueName: \"kubernetes.io/projected/221a034a-d231-46b6-b0ea-624788b21fea-kube-api-access-xrdfg\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.615090 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.615155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.615268 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklvg\" (UniqueName: \"kubernetes.io/projected/1f9f0394-f71b-48e3-a338-e824cdbb8c69-kube-api-access-lklvg\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.615667 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-metrics-certs\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.615682 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48098b79-2446-4f86-a42a-e6f12ab783d5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.616102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae563ac8-197b-4860-a115-175a12fa1690-config\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.615760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-trusted-ca\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.617694 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/75fca331-369d-4e0d-92f1-848f6e0778e2-certs\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.617717 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.617955 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0d95611-e321-46d4-ba78-b847021133c9-profile-collector-cert\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.618063 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/af7cbe43-858b-42b9-a800-4180f6a8056d-cert\") pod \"ingress-canary-q52vk\" (UID: \"af7cbe43-858b-42b9-a800-4180f6a8056d\") " pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.618394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.619825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-trusted-ca\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.622956 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae563ac8-197b-4860-a115-175a12fa1690-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.628083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-certificates\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.633223 4787 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.634964 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48098b79-2446-4f86-a42a-e6f12ab783d5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.637665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d87b05a-6cc4-4313-8993-ad5378bdc68f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.637719 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6d5\" (UniqueName: \"kubernetes.io/projected/a0538112-d98b-49ff-9618-654279d0ef7f-kube-api-access-wc6d5\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.637840 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc93068-54c2-4489-89e8-d4deaab95161-config\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.637896 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/75fca331-369d-4e0d-92f1-848f6e0778e2-node-bootstrap-token\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.637924 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.637950 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0d95611-e321-46d4-ba78-b847021133c9-srv-cert\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638001 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-policies\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-config\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638181 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-metrics-tls\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638223 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xnls\" (UniqueName: \"kubernetes.io/projected/af7cbe43-858b-42b9-a800-4180f6a8056d-kube-api-access-7xnls\") pod \"ingress-canary-q52vk\" (UID: \"af7cbe43-858b-42b9-a800-4180f6a8056d\") " pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638282 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638331 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdzh7\" (UniqueName: \"kubernetes.io/projected/825f12a8-ed8f-4a13-910c-53801339ec23-kube-api-access-mdzh7\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638381 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74425\" (UniqueName: \"kubernetes.io/projected/803c4ef1-5afd-477b-9833-87325d455383-kube-api-access-74425\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638460 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xzh\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-kube-api-access-j6xzh\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638496 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 
19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638576 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-mountpoint-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638667 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-encryption-config\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638703 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638803 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-stats-auth\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-bound-sa-token\") pod \"image-registry-697d97f7c8-js449\" (UID: 
\"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-serving-cert\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7zc\" (UniqueName: \"kubernetes.io/projected/c36f7c9f-bb17-49f7-bce8-7136e7443278-kube-api-access-cf7zc\") pod \"migrator-59844c95c7-8rgcw\" (UID: \"c36f7c9f-bb17-49f7-bce8-7136e7443278\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.638925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh956\" (UniqueName: \"kubernetes.io/projected/ac7ae820-3827-442c-83b4-aad43aa9e383-kube-api-access-rh956\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.639017 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-registration-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.639040 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/221a034a-d231-46b6-b0ea-624788b21fea-srv-cert\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.639085 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnz5\" (UniqueName: \"kubernetes.io/projected/b0a66bdd-41eb-4f60-9b98-d4d1705347da-kube-api-access-bgnz5\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.639155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.639261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f9f0394-f71b-48e3-a338-e824cdbb8c69-webhook-cert\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.663360 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.163268985 +0000 UTC m=+142.953935057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.665919 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-tls\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.671556 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48098b79-2446-4f86-a42a-e6f12ab783d5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.703896 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae563ac8-197b-4860-a115-175a12fa1690-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-szzvk\" (UID: \"ae563ac8-197b-4860-a115-175a12fa1690\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.710753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-bound-sa-token\") pod \"image-registry-697d97f7c8-js449\" 
(UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767170 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767450 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f9f0394-f71b-48e3-a338-e824cdbb8c69-webhook-cert\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767474 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d87b05a-6cc4-4313-8993-ad5378bdc68f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767498 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wr6q\" (UniqueName: 
\"kubernetes.io/projected/7dc20e80-624f-45be-90ae-f611007a3ffb-kube-api-access-4wr6q\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f9f0394-f71b-48e3-a338-e824cdbb8c69-apiservice-cert\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767545 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-image-import-ca\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767569 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f9f0394-f71b-48e3-a338-e824cdbb8c69-tmpfs\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.767595 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-audit\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768441 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0538112-d98b-49ff-9618-654279d0ef7f-audit-dir\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768502 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dc20e80-624f-45be-90ae-f611007a3ffb-config-volume\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768524 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7dc20e80-624f-45be-90ae-f611007a3ffb-metrics-tls\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768547 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768571 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d87b05a-6cc4-4313-8993-ad5378bdc68f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768597 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn9np\" (UniqueName: \"kubernetes.io/projected/8dc93068-54c2-4489-89e8-d4deaab95161-kube-api-access-vn9np\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768637 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-etcd-client\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768672 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrppn\" (UniqueName: \"kubernetes.io/projected/75fca331-369d-4e0d-92f1-848f6e0778e2-kube-api-access-zrppn\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc 
kubenswrapper[4787]: I0219 19:21:14.768698 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc93068-54c2-4489-89e8-d4deaab95161-serving-cert\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768724 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0538112-d98b-49ff-9618-654279d0ef7f-node-pullsecrets\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-dir\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768804 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-csi-data-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkztt\" (UniqueName: \"kubernetes.io/projected/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-kube-api-access-vkztt\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768851 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825f12a8-ed8f-4a13-910c-53801339ec23-service-ca-bundle\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768879 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768898 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-plugins-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768965 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-default-certificate\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.768989 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769021 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/221a034a-d231-46b6-b0ea-624788b21fea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769048 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-socket-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769072 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44ss\" (UniqueName: \"kubernetes.io/projected/b0d95611-e321-46d4-ba78-b847021133c9-kube-api-access-x44ss\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769114 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gr8d\" (UniqueName: \"kubernetes.io/projected/6d87b05a-6cc4-4313-8993-ad5378bdc68f-kube-api-access-8gr8d\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrdfg\" (UniqueName: \"kubernetes.io/projected/221a034a-d231-46b6-b0ea-624788b21fea-kube-api-access-xrdfg\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769160 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769197 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lklvg\" (UniqueName: \"kubernetes.io/projected/1f9f0394-f71b-48e3-a338-e824cdbb8c69-kube-api-access-lklvg\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: 
\"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-metrics-certs\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769266 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/75fca331-369d-4e0d-92f1-848f6e0778e2-certs\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769308 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-trusted-ca\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769339 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b0d95611-e321-46d4-ba78-b847021133c9-profile-collector-cert\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7cbe43-858b-42b9-a800-4180f6a8056d-cert\") pod \"ingress-canary-q52vk\" (UID: \"af7cbe43-858b-42b9-a800-4180f6a8056d\") " pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769391 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769455 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc6d5\" (UniqueName: \"kubernetes.io/projected/a0538112-d98b-49ff-9618-654279d0ef7f-kube-api-access-wc6d5\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769481 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d87b05a-6cc4-4313-8993-ad5378bdc68f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769517 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc93068-54c2-4489-89e8-d4deaab95161-config\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/75fca331-369d-4e0d-92f1-848f6e0778e2-node-bootstrap-token\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769566 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0d95611-e321-46d4-ba78-b847021133c9-srv-cert\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769651 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-policies\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 
19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769689 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-metrics-tls\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769711 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-config\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769737 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xnls\" (UniqueName: \"kubernetes.io/projected/af7cbe43-858b-42b9-a800-4180f6a8056d-kube-api-access-7xnls\") pod \"ingress-canary-q52vk\" (UID: \"af7cbe43-858b-42b9-a800-4180f6a8056d\") " pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769773 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769798 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: 
\"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769852 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdzh7\" (UniqueName: \"kubernetes.io/projected/825f12a8-ed8f-4a13-910c-53801339ec23-kube-api-access-mdzh7\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74425\" (UniqueName: \"kubernetes.io/projected/803c4ef1-5afd-477b-9833-87325d455383-kube-api-access-74425\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.769978 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-mountpoint-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-encryption-config\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770027 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770061 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-stats-auth\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770091 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7zc\" (UniqueName: \"kubernetes.io/projected/c36f7c9f-bb17-49f7-bce8-7136e7443278-kube-api-access-cf7zc\") pod \"migrator-59844c95c7-8rgcw\" (UID: \"c36f7c9f-bb17-49f7-bce8-7136e7443278\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770114 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh956\" (UniqueName: \"kubernetes.io/projected/ac7ae820-3827-442c-83b4-aad43aa9e383-kube-api-access-rh956\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770143 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-serving-cert\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770191 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-registration-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770214 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/221a034a-d231-46b6-b0ea-624788b21fea-srv-cert\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.770241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgnz5\" (UniqueName: \"kubernetes.io/projected/b0a66bdd-41eb-4f60-9b98-d4d1705347da-kube-api-access-bgnz5\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.772131 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-image-import-ca\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.775052 4787 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.274997039 +0000 UTC m=+143.065662981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.775926 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f9f0394-f71b-48e3-a338-e824cdbb8c69-tmpfs\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.776531 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-audit\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.776660 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0538112-d98b-49ff-9618-654279d0ef7f-audit-dir\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.779026 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f9f0394-f71b-48e3-a338-e824cdbb8c69-apiservice-cert\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.779727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.783764 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dc20e80-624f-45be-90ae-f611007a3ffb-config-volume\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.786208 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-csi-data-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.786769 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d87b05a-6cc4-4313-8993-ad5378bdc68f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 
19:21:14.788131 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc93068-54c2-4489-89e8-d4deaab95161-config\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.788202 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0538112-d98b-49ff-9618-654279d0ef7f-node-pullsecrets\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.788232 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-dir\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.790714 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xzh\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-kube-api-access-j6xzh\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.791949 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-trusted-ca\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 
19:21:14.792373 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc93068-54c2-4489-89e8-d4deaab95161-serving-cert\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.792885 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7dc20e80-624f-45be-90ae-f611007a3ffb-metrics-tls\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.800980 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.801083 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-mountpoint-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.803444 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af7cbe43-858b-42b9-a800-4180f6a8056d-cert\") pod \"ingress-canary-q52vk\" (UID: \"af7cbe43-858b-42b9-a800-4180f6a8056d\") " pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.803729 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.804890 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.805274 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/75fca331-369d-4e0d-92f1-848f6e0778e2-node-bootstrap-token\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.805878 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.809511 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.811462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.815747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d87b05a-6cc4-4313-8993-ad5378bdc68f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.817309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.817666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b0d95611-e321-46d4-ba78-b847021133c9-profile-collector-cert\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.818588 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-config\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.818940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-plugins-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.818743 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.819310 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.819920 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825f12a8-ed8f-4a13-910c-53801339ec23-service-ca-bundle\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.820064 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-registration-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.820691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-policies\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.821331 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-metrics-certs\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.821963 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.822145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/803c4ef1-5afd-477b-9833-87325d455383-socket-dir\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.823510 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0538112-d98b-49ff-9618-654279d0ef7f-etcd-serving-ca\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.827032 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/75fca331-369d-4e0d-92f1-848f6e0778e2-certs\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.841669 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/221a034a-d231-46b6-b0ea-624788b21fea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.841669 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.842109 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgnz5\" (UniqueName: \"kubernetes.io/projected/b0a66bdd-41eb-4f60-9b98-d4d1705347da-kube-api-access-bgnz5\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.842882 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-serving-cert\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.843375 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.843457 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-encryption-config\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.843674 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-default-certificate\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.844067 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-metrics-tls\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.844217 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/825f12a8-ed8f-4a13-910c-53801339ec23-stats-auth\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.850394 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wr6q\" (UniqueName: \"kubernetes.io/projected/7dc20e80-624f-45be-90ae-f611007a3ffb-kube-api-access-4wr6q\") pod \"dns-default-5c8rt\" (UID: \"7dc20e80-624f-45be-90ae-f611007a3ffb\") " pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.853123 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kpxw8\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.853479 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.853748 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/221a034a-d231-46b6-b0ea-624788b21fea-srv-cert\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.859450 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lklvg\" (UniqueName: \"kubernetes.io/projected/1f9f0394-f71b-48e3-a338-e824cdbb8c69-kube-api-access-lklvg\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.862320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0538112-d98b-49ff-9618-654279d0ef7f-etcd-client\") pod \"apiserver-76f77b778f-qz8z6\" (UID: \"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.864668 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f9f0394-f71b-48e3-a338-e824cdbb8c69-webhook-cert\") pod \"packageserver-d55dfcdfc-p9djm\" (UID: \"1f9f0394-f71b-48e3-a338-e824cdbb8c69\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.866065 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn9np\" (UniqueName: \"kubernetes.io/projected/8dc93068-54c2-4489-89e8-d4deaab95161-kube-api-access-vn9np\") pod \"service-ca-operator-777779d784-6cmmg\" (UID: \"8dc93068-54c2-4489-89e8-d4deaab95161\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.867789 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b0d95611-e321-46d4-ba78-b847021133c9-srv-cert\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:14 crc 
kubenswrapper[4787]: I0219 19:21:14.874025 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.874770 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.374753759 +0000 UTC m=+143.165419701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.886875 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrppn\" (UniqueName: \"kubernetes.io/projected/75fca331-369d-4e0d-92f1-848f6e0778e2-kube-api-access-zrppn\") pod \"machine-config-server-dfvsh\" (UID: \"75fca331-369d-4e0d-92f1-848f6e0778e2\") " pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.909306 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc6d5\" (UniqueName: \"kubernetes.io/projected/a0538112-d98b-49ff-9618-654279d0ef7f-kube-api-access-wc6d5\") pod \"apiserver-76f77b778f-qz8z6\" (UID: 
\"a0538112-d98b-49ff-9618-654279d0ef7f\") " pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.933243 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d87b05a-6cc4-4313-8993-ad5378bdc68f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.952654 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdzh7\" (UniqueName: \"kubernetes.io/projected/825f12a8-ed8f-4a13-910c-53801339ec23-kube-api-access-mdzh7\") pod \"router-default-5444994796-29dzb\" (UID: \"825f12a8-ed8f-4a13-910c-53801339ec23\") " pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.976033 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:14 crc kubenswrapper[4787]: E0219 19:21:14.976577 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.476555416 +0000 UTC m=+143.267221358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.982303 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74425\" (UniqueName: \"kubernetes.io/projected/803c4ef1-5afd-477b-9833-87325d455383-kube-api-access-74425\") pod \"csi-hostpathplugin-6xptg\" (UID: \"803c4ef1-5afd-477b-9833-87325d455383\") " pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:14 crc kubenswrapper[4787]: I0219 19:21:14.982893 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.012763 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.022260 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.016125 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7bpzk"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.013036 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xnls\" (UniqueName: \"kubernetes.io/projected/af7cbe43-858b-42b9-a800-4180f6a8056d-kube-api-access-7xnls\") pod \"ingress-canary-q52vk\" (UID: \"af7cbe43-858b-42b9-a800-4180f6a8056d\") " pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.032175 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.053721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7zc\" (UniqueName: \"kubernetes.io/projected/c36f7c9f-bb17-49f7-bce8-7136e7443278-kube-api-access-cf7zc\") pod \"migrator-59844c95c7-8rgcw\" (UID: \"c36f7c9f-bb17-49f7-bce8-7136e7443278\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.055342 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.077654 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkztt\" (UniqueName: \"kubernetes.io/projected/290ac79a-db6b-4fbb-b0ee-e4b0a397a312-kube-api-access-vkztt\") pod \"ingress-operator-5b745b69d9-gtnqp\" (UID: \"290ac79a-db6b-4fbb-b0ee-e4b0a397a312\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.079126 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.079550 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.579534976 +0000 UTC m=+143.370200918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.080793 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh956\" (UniqueName: \"kubernetes.io/projected/ac7ae820-3827-442c-83b4-aad43aa9e383-kube-api-access-rh956\") pod \"oauth-openshift-558db77b4-gcqj4\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.082967 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.090565 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.105416 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.105759 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.107024 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gr8d\" (UniqueName: \"kubernetes.io/projected/6d87b05a-6cc4-4313-8993-ad5378bdc68f-kube-api-access-8gr8d\") pod \"cluster-image-registry-operator-dc59b4c8b-brdgc\" (UID: \"6d87b05a-6cc4-4313-8993-ad5378bdc68f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.119467 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrdfg\" (UniqueName: \"kubernetes.io/projected/221a034a-d231-46b6-b0ea-624788b21fea-kube-api-access-xrdfg\") pod \"olm-operator-6b444d44fb-kwwj8\" (UID: \"221a034a-d231-46b6-b0ea-624788b21fea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.120260 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.122842 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dfvsh" Feb 19 19:21:15 crc kubenswrapper[4787]: W0219 19:21:15.132520 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2854140a_273c_41e8_9a78_611994b05d26.slice/crio-a44855969f69545461199fa672c9a654e64755c891ea81a147ef7021dd78521e WatchSource:0}: Error finding container a44855969f69545461199fa672c9a654e64755c891ea81a147ef7021dd78521e: Status 404 returned error can't find the container with id a44855969f69545461199fa672c9a654e64755c891ea81a147ef7021dd78521e Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.143006 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q52vk" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.147378 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.172953 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44ss\" (UniqueName: \"kubernetes.io/projected/b0d95611-e321-46d4-ba78-b847021133c9-kube-api-access-x44ss\") pod \"catalog-operator-68c6474976-nh5mn\" (UID: \"b0d95611-e321-46d4-ba78-b847021133c9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.173226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" event={"ID":"1f465edc-b94b-4a9d-9f9c-1540bb933c8d","Type":"ContainerStarted","Data":"f2945d5fb9e308e9e1e965c72670e8abde136ddeea955714749ecf1e668cd737"} Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.180253 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.180600 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rdk9\" (UniqueName: \"kubernetes.io/projected/eb7b606b-caed-4b2c-8db1-092c38d05ad0-kube-api-access-4rdk9\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.187772 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.687739551 +0000 UTC m=+143.478405493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.188478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rdk9\" (UniqueName: \"kubernetes.io/projected/eb7b606b-caed-4b2c-8db1-092c38d05ad0-kube-api-access-4rdk9\") pod \"openshift-apiserver-operator-796bbdcf4f-gzbjk\" (UID: \"eb7b606b-caed-4b2c-8db1-092c38d05ad0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:15 crc kubenswrapper[4787]: W0219 19:21:15.266409 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69c5b2ea_5cb5_40ca_ba80_0e7266b80143.slice/crio-f74d116cb3a9ebf6122eca3e4d90f6728b0e0ac75e9a98e73eda482243527a4b WatchSource:0}: Error finding container f74d116cb3a9ebf6122eca3e4d90f6728b0e0ac75e9a98e73eda482243527a4b: Status 404 returned error can't find the container with id f74d116cb3a9ebf6122eca3e4d90f6728b0e0ac75e9a98e73eda482243527a4b Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.267484 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" event={"ID":"76d9faee-2bf0-4d62-89bc-ee58fa26a36f","Type":"ContainerStarted","Data":"8e3d256b0a26511940474fda8cb5f17122e54e7d8e5d5722f5117ca42dcc5289"} Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.282637 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.283041 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.783028187 +0000 UTC m=+143.573694129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.298848 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" event={"ID":"32ca8e62-696d-4f05-9ba2-b8fbc20e407f","Type":"ContainerStarted","Data":"c03225a17bff3615ce528e5bf2ba637e9189d79a6d3e090fefcecf0d21d237b0"} Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.299061 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.308569 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.339996 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.357559 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.376422 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.377546 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.387555 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.393761 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:15.893699391 +0000 UTC m=+143.684365333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.394074 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.409331 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-shj95"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.413368 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.415556 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.430753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" event={"ID":"80ece0e8-135e-410d-b1ea-ca5ee78a3e3c","Type":"ContainerStarted","Data":"50a56fe46a53c777448a1e088855079c5762cad1db4003ecf123a52c8eeddcb1"} Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.433578 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h92w2" event={"ID":"70c356be-c7d4-479a-a357-4cfe97e5e9c9","Type":"ContainerStarted","Data":"295b80b8860738632d1f7adbd3618b72e89e1c1047b46887930a5bc2bba49245"} Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 
19:21:15.433626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h92w2" event={"ID":"70c356be-c7d4-479a-a357-4cfe97e5e9c9","Type":"ContainerStarted","Data":"800e3ff5699332140f4d749cc3b3b55278be4c1679ebc1f2a672a189e027e00a"} Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.437676 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.437718 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.443924 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-2sgql"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.458541 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.473741 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.492955 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.498435 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.511974 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.512579 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.012562094 +0000 UTC m=+143.803228036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.516565 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h6jhc"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.624573 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.625307 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.125258034 +0000 UTC m=+143.915924006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.625646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.629034 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.129010629 +0000 UTC m=+143.919676761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: W0219 19:21:15.714555 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096ef5e0_07a5_4d8d_b217_c91a220b54b3.slice/crio-17ec840690bde5421dc1ae36158687383e4d304578908ed732bc0306c020dd57 WatchSource:0}: Error finding container 17ec840690bde5421dc1ae36158687383e4d304578908ed732bc0306c020dd57: Status 404 returned error can't find the container with id 17ec840690bde5421dc1ae36158687383e4d304578908ed732bc0306c020dd57 Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.728982 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.729317 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.229290274 +0000 UTC m=+144.019956216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.742307 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tm84p"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.746185 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.774505 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf"] Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.804032 4787 csr.go:261] certificate signing request csr-vjwr6 is approved, waiting to be issued Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.818358 4787 csr.go:257] certificate signing request csr-vjwr6 is issued Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.834674 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.835040 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 19:21:16.33502483 +0000 UTC m=+144.125690772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.901708 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" podStartSLOduration=122.901686618 podStartE2EDuration="2m2.901686618s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:15.901199374 +0000 UTC m=+143.691865316" watchObservedRunningTime="2026-02-19 19:21:15.901686618 +0000 UTC m=+143.692352560" Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.948074 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.953818 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.453770309 +0000 UTC m=+144.244436251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:15 crc kubenswrapper[4787]: I0219 19:21:15.965371 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:15 crc kubenswrapper[4787]: E0219 19:21:15.965993 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.465971199 +0000 UTC m=+144.256637141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.068853 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.069369 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.56935042 +0000 UTC m=+144.360016362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.119924 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.169891 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.177532 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.677512005 +0000 UTC m=+144.468177937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.189182 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.244351 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podStartSLOduration=123.244329847 podStartE2EDuration="2m3.244329847s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.23187355 +0000 UTC m=+144.022539492" watchObservedRunningTime="2026-02-19 19:21:16.244329847 +0000 UTC m=+144.034995789" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.288830 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.289346 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 19:21:16.789326471 +0000 UTC m=+144.579992413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.289547 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" podStartSLOduration=122.289529776 podStartE2EDuration="2m2.289529776s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.262344689 +0000 UTC m=+144.053010651" watchObservedRunningTime="2026-02-19 19:21:16.289529776 +0000 UTC m=+144.080195728" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.360472 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6cngl" podStartSLOduration=123.360451773 podStartE2EDuration="2m3.360451773s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.335530818 +0000 UTC m=+144.126196770" watchObservedRunningTime="2026-02-19 19:21:16.360451773 +0000 UTC m=+144.151117715" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.390138 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.390690 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.890668045 +0000 UTC m=+144.681333987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.416860 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.427923 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q52vk"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.441450 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5c8rt"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.450335 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tmmbh" podStartSLOduration=123.450311367 podStartE2EDuration="2m3.450311367s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.448837006 +0000 UTC m=+144.239502958" watchObservedRunningTime="2026-02-19 19:21:16.450311367 +0000 UTC m=+144.240977309" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.489183 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tmcl8" podStartSLOduration=123.48915603 podStartE2EDuration="2m3.48915603s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.486176667 +0000 UTC m=+144.276842609" watchObservedRunningTime="2026-02-19 19:21:16.48915603 +0000 UTC m=+144.279821972" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.490890 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.491125 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.991089724 +0000 UTC m=+144.781755666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.491388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.491883 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:16.991873245 +0000 UTC m=+144.782539187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.512545 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-h92w2" podStartSLOduration=123.512515781 podStartE2EDuration="2m3.512515781s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.50999251 +0000 UTC m=+144.300658472" watchObservedRunningTime="2026-02-19 19:21:16.512515781 +0000 UTC m=+144.303181723" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.532454 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" event={"ID":"69c5b2ea-5cb5-40ca-ba80-0e7266b80143","Type":"ContainerStarted","Data":"f74d116cb3a9ebf6122eca3e4d90f6728b0e0ac75e9a98e73eda482243527a4b"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.538436 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qz8z6"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.554241 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" event={"ID":"607f701f-0640-4730-a751-da18d229b3f8","Type":"ContainerStarted","Data":"cde8d86d44dae75e4f79d4c16503bc17bcc298c4de7f8f889f5485ecc97ebad9"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.558550 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" event={"ID":"5a448d36-c533-49ee-8815-d8190936ac39","Type":"ContainerStarted","Data":"3d35a903c0a01152be6edc8fdecba94273d38e0fdd531fde7f9e2bd68aa0208e"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.581007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" event={"ID":"2854140a-273c-41e8-9a78-611994b05d26","Type":"ContainerStarted","Data":"a44855969f69545461199fa672c9a654e64755c891ea81a147ef7021dd78521e"} Feb 19 19:21:16 crc kubenswrapper[4787]: W0219 19:21:16.596104 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae563ac8_197b_4860_a115_175a12fa1690.slice/crio-7a873ec25bade8133ce31f5106ba289f83d3fc29e0a311a030b696b447fbdbdb WatchSource:0}: Error finding container 7a873ec25bade8133ce31f5106ba289f83d3fc29e0a311a030b696b447fbdbdb: Status 404 returned error can't find the container with id 7a873ec25bade8133ce31f5106ba289f83d3fc29e0a311a030b696b447fbdbdb Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.597126 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.599834 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.099795663 +0000 UTC m=+144.890461605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.601023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.610538 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" event={"ID":"5fbae455-2136-403f-bda5-236a4de586da","Type":"ContainerStarted","Data":"840bfeedb25c3f8008fb3e0b9169bd86b81c94d34e144c598a589572912ebb76"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.657812 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" event={"ID":"fe0d7abc-7a42-4ba4-8403-c6b9dd202217","Type":"ContainerStarted","Data":"b7b2cec7e24c7c6c8016632358a25f814c849b0edfd536abe7af1a6ace03df82"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.704429 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.704953 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.204927783 +0000 UTC m=+144.995593725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.714046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dfvsh" event={"ID":"75fca331-369d-4e0d-92f1-848f6e0778e2","Type":"ContainerStarted","Data":"8bd44b646bccecd039a058ca2412241046db77d870b8c79ff341e94de0c26cc6"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.744177 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" event={"ID":"096ef5e0-07a5-4d8d-b217-c91a220b54b3","Type":"ContainerStarted","Data":"17ec840690bde5421dc1ae36158687383e4d304578908ed732bc0306c020dd57"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.745872 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xplkr" podStartSLOduration=122.745838523 podStartE2EDuration="2m2.745838523s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.706296411 +0000 UTC m=+144.496962353" watchObservedRunningTime="2026-02-19 19:21:16.745838523 +0000 UTC m=+144.536504465" Feb 19 19:21:16 crc 
kubenswrapper[4787]: I0219 19:21:16.746702 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4xqxs" podStartSLOduration=123.746694387 podStartE2EDuration="2m3.746694387s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:16.742480479 +0000 UTC m=+144.533146421" watchObservedRunningTime="2026-02-19 19:21:16.746694387 +0000 UTC m=+144.537360329" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.746836 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kpxw8"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.787556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-29dzb" event={"ID":"825f12a8-ed8f-4a13-910c-53801339ec23","Type":"ContainerStarted","Data":"a7b745163644b9b178652983a5d985816d102720171ff746ed97232404aa5b99"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.790706 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.800395 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" event={"ID":"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3","Type":"ContainerStarted","Data":"c6ec2cf66ea853647bc34ab55db282969789dc64e3c8d281f4eae0f29feeb269"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.814057 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.814328 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" event={"ID":"76d9faee-2bf0-4d62-89bc-ee58fa26a36f","Type":"ContainerStarted","Data":"6649232f90dcf0551436dfe667ef273335e067197da5761d043d2c4b889d1487"} Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.816081 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.316031359 +0000 UTC m=+145.106697301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.819733 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 19:16:15 +0000 UTC, rotation deadline is 2027-01-04 22:44:51.741898283 +0000 UTC Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.819822 4787 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7659h23m34.922079758s for next certificate rotation Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.840855 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" 
event={"ID":"92a6d23d-0714-44b6-9e17-b9d5d93824c1","Type":"ContainerStarted","Data":"74498d95fd998cdf8e86fd4bf7f0011a8093a8a83ba9e55a20df4af4b1014967"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.842436 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.917782 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:16 crc kubenswrapper[4787]: E0219 19:21:16.920796 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.420780288 +0000 UTC m=+145.211446230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.964360 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6xptg"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.964430 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8"] Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.973323 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" event={"ID":"1f465edc-b94b-4a9d-9f9c-1540bb933c8d","Type":"ContainerStarted","Data":"6e599c7798dffc36b480cb34b1a471ca574ebbe54ba9d89f6f643563ca9746ff"} Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.973493 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:16 crc kubenswrapper[4787]: I0219 19:21:16.985918 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" event={"ID":"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9","Type":"ContainerStarted","Data":"ea48a0dc9b921cac52f69bb1ab9df92bd8d9fd03050e5d6e0a0eb453a5f294ba"} Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.010617 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gcqj4"] Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.023062 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.024559 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.5245387 +0000 UTC m=+145.315204642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.032089 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" event={"ID":"9ded0889-3bf6-4276-9af2-a7a81df383ea","Type":"ContainerStarted","Data":"08e774db477f70a82e93d034e927cf6cf6fca77643a966727dcb9e5b683315d3"} Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.033353 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.034764 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk"] Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.044173 4787 patch_prober.go:28] interesting 
pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.044249 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.106215 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" event={"ID":"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5","Type":"ContainerStarted","Data":"ece777e149c382ae7e88f919717e69b804077c035717b4ad669b874e622b27df"} Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.125966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.126740 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.626716587 +0000 UTC m=+145.417382529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.130882 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.130957 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.159040 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.181636 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podStartSLOduration=124.181589267 podStartE2EDuration="2m4.181589267s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:17.17237915 +0000 UTC m=+144.963045092" watchObservedRunningTime="2026-02-19 19:21:17.181589267 +0000 UTC m=+144.972255209" Feb 19 19:21:17 crc 
kubenswrapper[4787]: I0219 19:21:17.183442 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp"] Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.196991 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" podStartSLOduration=123.196965455 podStartE2EDuration="2m3.196965455s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:17.194211328 +0000 UTC m=+144.984877260" watchObservedRunningTime="2026-02-19 19:21:17.196965455 +0000 UTC m=+144.987631397" Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.227269 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.228922 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.728899535 +0000 UTC m=+145.519565477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.319839 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wl9fg" podStartSLOduration=124.316948309 podStartE2EDuration="2m4.316948309s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:17.278971321 +0000 UTC m=+145.069637253" watchObservedRunningTime="2026-02-19 19:21:17.316948309 +0000 UTC m=+145.107614251" Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.329442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.329876 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.829861789 +0000 UTC m=+145.620527731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.431099 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.432013 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.931977195 +0000 UTC m=+145.722643137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.432476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.432887 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:17.932856609 +0000 UTC m=+145.723522551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.534187 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.534401 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.034374018 +0000 UTC m=+145.825039960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.535082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.535778 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.035760147 +0000 UTC m=+145.826426089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.636414 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.637148 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.137106591 +0000 UTC m=+145.927772533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.738749 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.739191 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.239174106 +0000 UTC m=+146.029840048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.846304 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.847916 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.347870215 +0000 UTC m=+146.138536317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:17 crc kubenswrapper[4787]: I0219 19:21:17.949032 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:17 crc kubenswrapper[4787]: E0219 19:21:17.949493 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.449472786 +0000 UTC m=+146.240138728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.047971 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:18 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:18 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:18 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.048389 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.049881 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.050054 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 19:21:18.550019227 +0000 UTC m=+146.340685169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.050336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.050761 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.550746598 +0000 UTC m=+146.341412530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.151190 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.151692 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.6516713 +0000 UTC m=+146.442337242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.209038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" event={"ID":"607f701f-0640-4730-a751-da18d229b3f8","Type":"ContainerStarted","Data":"3678a77255e2f61d0d669676d9ede63f3fe273d74a7abe0b7176068440b012e8"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.209105 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" event={"ID":"607f701f-0640-4730-a751-da18d229b3f8","Type":"ContainerStarted","Data":"70e59cc4e8abfce37a98e1ce5768e30ba381ad9f3d03cd77ee1bad5ebb340903"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.210870 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" event={"ID":"290ac79a-db6b-4fbb-b0ee-e4b0a397a312","Type":"ContainerStarted","Data":"412550a1c6627b3bffce9777642c0a65a35154839c36c0394279e5bdb38cc031"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.211668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" event={"ID":"a0538112-d98b-49ff-9618-654279d0ef7f","Type":"ContainerStarted","Data":"8dc2f80837a80bc7ca7d823f8ff519ff7ded50890af7f59c8c370f3a2c735fef"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.225396 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" 
event={"ID":"ac7ae820-3827-442c-83b4-aad43aa9e383","Type":"ContainerStarted","Data":"e3ffdde6a9da8d7f8a6df5d156f7c30c241e1f987509377ad70e4c101244d39e"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.246961 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-29dzb" podStartSLOduration=125.246930765 podStartE2EDuration="2m5.246930765s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:17.316408364 +0000 UTC m=+145.107074316" watchObservedRunningTime="2026-02-19 19:21:18.246930765 +0000 UTC m=+146.037596707" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.258872 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-z7nbp" podStartSLOduration=124.258835927 podStartE2EDuration="2m4.258835927s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.238064008 +0000 UTC m=+146.028729950" watchObservedRunningTime="2026-02-19 19:21:18.258835927 +0000 UTC m=+146.049501869" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.264130 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.265676 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" 
event={"ID":"6d87b05a-6cc4-4313-8993-ad5378bdc68f","Type":"ContainerStarted","Data":"16dc58212ce158d8405e644da93f16e90cd310ecce1480045d172e897c522a19"} Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.290915 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.79088985 +0000 UTC m=+146.581555782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.332425 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dfvsh" event={"ID":"75fca331-369d-4e0d-92f1-848f6e0778e2","Type":"ContainerStarted","Data":"0645d0e681bea3496607ef780c07977127b6d5cfcf5bba704812a669d0c36221"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.363850 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" event={"ID":"8dc93068-54c2-4489-89e8-d4deaab95161","Type":"ContainerStarted","Data":"18267ad35b95212a9d9dbc0fab8d12d2c3ef200606bf97c9e82bf5a5be44af2c"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.363899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" event={"ID":"8dc93068-54c2-4489-89e8-d4deaab95161","Type":"ContainerStarted","Data":"c8c294313b511b92366da6b7cfe95fa6ea3aa1005448272ef2e7fdf7b156a827"} Feb 19 
19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.364950 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.365251 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.865234482 +0000 UTC m=+146.655900424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.370389 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dfvsh" podStartSLOduration=7.370345434 podStartE2EDuration="7.370345434s" podCreationTimestamp="2026-02-19 19:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.364282905 +0000 UTC m=+146.154948847" watchObservedRunningTime="2026-02-19 19:21:18.370345434 +0000 UTC m=+146.161011366" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.395278 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cmmg" podStartSLOduration=124.395255559 podStartE2EDuration="2m4.395255559s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.394398015 +0000 UTC m=+146.185063957" watchObservedRunningTime="2026-02-19 19:21:18.395255559 +0000 UTC m=+146.185921501" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.404741 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" event={"ID":"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5","Type":"ContainerStarted","Data":"f0b255258f985c70a9692a8f775373e24765d43a1f661523ae7d20850feb00ac"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.427731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" event={"ID":"c36f7c9f-bb17-49f7-bce8-7136e7443278","Type":"ContainerStarted","Data":"8f08fcda52c87349f57ef2e12e27b50a99bc2908755f638128999a99c62c7555"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.437523 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" event={"ID":"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9","Type":"ContainerStarted","Data":"6e29b882ba940f3768c12668125297bb96fc8705cca58c42c4255645ca210497"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.444869 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" podStartSLOduration=125.44483991 podStartE2EDuration="2m5.44483991s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.442179296 +0000 UTC m=+146.232845228" 
watchObservedRunningTime="2026-02-19 19:21:18.44483991 +0000 UTC m=+146.235505852" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.450426 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" event={"ID":"2ddc3bbc-6aa1-4414-a35b-ce99cb58f5f3","Type":"ContainerStarted","Data":"1d9703d9877c5be83149b965c6cd42ea5938e0a47f8796d7a327742f9a9f45ed"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.466762 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.467440 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:18.96741777 +0000 UTC m=+146.758083712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.471904 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" event={"ID":"b0d95611-e321-46d4-ba78-b847021133c9","Type":"ContainerStarted","Data":"defb815cf3312a4b11fc25603fb8fcc52503f55e4babc635103fd68cfba164d1"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.519496 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" event={"ID":"096ef5e0-07a5-4d8d-b217-c91a220b54b3","Type":"ContainerStarted","Data":"46ca6184554e9dc797bd204fd0685a9eebaf6cc19ade6d8b51ca171427271e05"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.536970 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" event={"ID":"ae563ac8-197b-4860-a115-175a12fa1690","Type":"ContainerStarted","Data":"7a873ec25bade8133ce31f5106ba289f83d3fc29e0a311a030b696b447fbdbdb"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.557310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" event={"ID":"fe0d7abc-7a42-4ba4-8403-c6b9dd202217","Type":"ContainerStarted","Data":"c20ba0523a45b8eaddbedb8889475cff9adb7d08cc14f5e6311b7bbcbbe51e67"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.569013 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.570534 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.070517113 +0000 UTC m=+146.861183055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.589380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" event={"ID":"eb7b606b-caed-4b2c-8db1-092c38d05ad0","Type":"ContainerStarted","Data":"c5405c3e859a2bdd67abd5bcc40553856e215932230d7a827c1d7d6cfea12ecb"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.601705 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" event={"ID":"eb7b606b-caed-4b2c-8db1-092c38d05ad0","Type":"ContainerStarted","Data":"b80d4b3a9944a39673398ff38ff71b06e90bf348fab3a07270a53593dc035e5b"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.615829 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bxj92" podStartSLOduration=124.615806205 podStartE2EDuration="2m4.615806205s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.498207568 +0000 UTC m=+146.288873510" watchObservedRunningTime="2026-02-19 19:21:18.615806205 +0000 UTC m=+146.406472147" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.649076 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" event={"ID":"1f9f0394-f71b-48e3-a338-e824cdbb8c69","Type":"ContainerStarted","Data":"493c53de7029af3783afa793495b5484e2379540a76990fdaa0b08a849a712f3"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.649138 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" event={"ID":"1f9f0394-f71b-48e3-a338-e824cdbb8c69","Type":"ContainerStarted","Data":"ce54e272348a763cd2f69948bca0c6c0e5e3c11d67ae5d59cc4cb434c3aa2d67"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.650976 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.660375 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.660436 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.670184 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" event={"ID":"69c5b2ea-5cb5-40ca-ba80-0e7266b80143","Type":"ContainerStarted","Data":"39e573512ddd1c0e49c2ac4849257fd1fd7f23159428be33fbe944a30a0bd735"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.671866 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.673953 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-h6jhc" podStartSLOduration=125.673931295 podStartE2EDuration="2m5.673931295s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.614989472 +0000 UTC m=+146.405655424" watchObservedRunningTime="2026-02-19 19:21:18.673931295 +0000 UTC m=+146.464597227" Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.674315 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.174296455 +0000 UTC m=+146.964962637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.702224 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" event={"ID":"1f465edc-b94b-4a9d-9f9c-1540bb933c8d","Type":"ContainerStarted","Data":"5aab2e5e1fa430eb035e938bb2c1ace6bc870397f86403cabb7e3ce416bac842"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.717933 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" event={"ID":"803c4ef1-5afd-477b-9833-87325d455383","Type":"ContainerStarted","Data":"1b5ac20f1c7cafdafbb188385ca9c8e5287349f0721e555d8fbad0b9eafd4915"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.758903 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" podStartSLOduration=125.758880722 podStartE2EDuration="2m5.758880722s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.679085248 +0000 UTC m=+146.469751190" watchObservedRunningTime="2026-02-19 19:21:18.758880722 +0000 UTC m=+146.549546664" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.776219 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.777769 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.277744228 +0000 UTC m=+147.068410170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.817946 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzbjk" podStartSLOduration=125.81791118699999 podStartE2EDuration="2m5.817911187s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.760281731 +0000 UTC m=+146.550947673" watchObservedRunningTime="2026-02-19 19:21:18.817911187 +0000 UTC m=+146.608577129" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.840191 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5c8rt" event={"ID":"7dc20e80-624f-45be-90ae-f611007a3ffb","Type":"ContainerStarted","Data":"ce2c1ffee19d9b29b846d2b6c066c698c205bf7a6ca7467302577a0ce7d7784b"} Feb 19 19:21:18 crc kubenswrapper[4787]: 
I0219 19:21:18.840716 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5c8rt" event={"ID":"7dc20e80-624f-45be-90ae-f611007a3ffb","Type":"ContainerStarted","Data":"37c662484c6cd01666a0f06e88f7c665fd4abaa0d778a400e13471c3d1142545"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.853770 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" event={"ID":"b0a66bdd-41eb-4f60-9b98-d4d1705347da","Type":"ContainerStarted","Data":"79201305a4650fb99ade5315e49a033c38cfe72be01771742bc934ae9117c973"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.853831 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" event={"ID":"b0a66bdd-41eb-4f60-9b98-d4d1705347da","Type":"ContainerStarted","Data":"4c247a8471958017d2f0b503ee41e5b0b0fe763842b1401a5151ac5e06a713b2"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.863806 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.873929 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kpxw8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.874009 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.886265 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.887119 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.387103506 +0000 UTC m=+147.177769448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.887902 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qf2tm" podStartSLOduration=124.887872377 podStartE2EDuration="2m4.887872377s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.80903469 +0000 UTC m=+146.599700632" watchObservedRunningTime="2026-02-19 19:21:18.887872377 +0000 UTC m=+146.678538319" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.916769 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q52vk" 
event={"ID":"af7cbe43-858b-42b9-a800-4180f6a8056d","Type":"ContainerStarted","Data":"5df2101274205f795b9ef2d22a0158ead510f2e06759031147fd6f593ec1a5cb"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.917055 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q52vk" event={"ID":"af7cbe43-858b-42b9-a800-4180f6a8056d","Type":"ContainerStarted","Data":"39b517fedc51fa94ffc0dd909e580fe1311e07defb757fbfbb4e5e122e9595c7"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.933383 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podStartSLOduration=124.933355665 podStartE2EDuration="2m4.933355665s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.887032714 +0000 UTC m=+146.677698666" watchObservedRunningTime="2026-02-19 19:21:18.933355665 +0000 UTC m=+146.724021607" Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.940986 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" event={"ID":"2854140a-273c-41e8-9a78-611994b05d26","Type":"ContainerStarted","Data":"cb9bb87f17d9c3d7646a4dfd5dfddb50555c91d4dec2cf460f8580ca1749ebdf"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.965437 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" event={"ID":"92a6d23d-0714-44b6-9e17-b9d5d93824c1","Type":"ContainerStarted","Data":"d85ffe2433a12779e651f7376a0ddba133642d57bb350ed4e0cc6c6305dbb308"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.982317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" 
event={"ID":"9ded0889-3bf6-4276-9af2-a7a81df383ea","Type":"ContainerStarted","Data":"54fd464052c3882f14750037dba5bc02132ca7307733088ebbfc672e94446a4b"} Feb 19 19:21:18 crc kubenswrapper[4787]: I0219 19:21:18.987676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:18 crc kubenswrapper[4787]: E0219 19:21:18.989708 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.489677574 +0000 UTC m=+147.280343516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.000183 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" podStartSLOduration=125.000154696 podStartE2EDuration="2m5.000154696s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:18.945558475 +0000 UTC m=+146.736224417" watchObservedRunningTime="2026-02-19 19:21:19.000154696 +0000 UTC m=+146.790820638" Feb 19 
19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.048634 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" event={"ID":"5a448d36-c533-49ee-8815-d8190936ac39","Type":"ContainerStarted","Data":"420d14b470fd45ceaed8543493a2997ba847ccc6c7582aa7a62ebc36387a7da5"} Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.052391 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q52vk" podStartSLOduration=7.052373531 podStartE2EDuration="7.052373531s" podCreationTimestamp="2026-02-19 19:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:19.000513986 +0000 UTC m=+146.791179928" watchObservedRunningTime="2026-02-19 19:21:19.052373531 +0000 UTC m=+146.843039473" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.054902 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:19 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:19 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:19 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.054979 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.094849 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.099052 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" event={"ID":"5fbae455-2136-403f-bda5-236a4de586da","Type":"ContainerStarted","Data":"0415994b84c9d298e44d7b81c7477ff9a8c8a143b1c7fdec56820071c04d04e6"} Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.099783 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.599761942 +0000 UTC m=+147.390427884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.101653 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" podStartSLOduration=125.101633184 podStartE2EDuration="2m5.101633184s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:19.063231404 +0000 UTC m=+146.853897346" watchObservedRunningTime="2026-02-19 19:21:19.101633184 
+0000 UTC m=+146.892299126" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.111822 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tm84p" podStartSLOduration=125.111800107 podStartE2EDuration="2m5.111800107s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:19.100197464 +0000 UTC m=+146.890863406" watchObservedRunningTime="2026-02-19 19:21:19.111800107 +0000 UTC m=+146.902466049" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.124835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" event={"ID":"221a034a-d231-46b6-b0ea-624788b21fea","Type":"ContainerStarted","Data":"fd0171f9b84a1e1874a3baed562523a42940b3337a0c37aa85adedba2938b4c2"} Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.124885 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.145071 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwwj8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.145564 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podUID="221a034a-d231-46b6-b0ea-624788b21fea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.164990 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-29dzb" event={"ID":"825f12a8-ed8f-4a13-910c-53801339ec23","Type":"ContainerStarted","Data":"adcc1a8b9a008894f392e2d965ded82980d8e2adfebdf4074afe3f5991070085"} Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.169905 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-htwzm" podStartSLOduration=125.169884246 podStartE2EDuration="2m5.169884246s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:19.14024562 +0000 UTC m=+146.930911572" watchObservedRunningTime="2026-02-19 19:21:19.169884246 +0000 UTC m=+146.960550188" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.204624 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.205170 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podStartSLOduration=125.205136399 podStartE2EDuration="2m5.205136399s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:19.20123069 +0000 UTC m=+146.991896622" watchObservedRunningTime="2026-02-19 19:21:19.205136399 +0000 UTC m=+146.995802341" Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.206819 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mc57j" podStartSLOduration=125.206809165 podStartE2EDuration="2m5.206809165s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:19.170835303 +0000 UTC m=+146.961501255" watchObservedRunningTime="2026-02-19 19:21:19.206809165 +0000 UTC m=+146.997475107" Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.215107 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.715063835 +0000 UTC m=+147.505729777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.309748 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.310230 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.810213617 +0000 UTC m=+147.600879559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.411271 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.411774 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:19.911745516 +0000 UTC m=+147.702411458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.513050 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.513547 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.013527313 +0000 UTC m=+147.804193265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.613792 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.614741 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.114719203 +0000 UTC m=+147.905385145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.716542 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.716972 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.216958482 +0000 UTC m=+148.007624424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.818046 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.818479 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.318453321 +0000 UTC m=+148.109119263 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:19 crc kubenswrapper[4787]: I0219 19:21:19.920470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:19 crc kubenswrapper[4787]: E0219 19:21:19.920882 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.420866865 +0000 UTC m=+148.211532807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.021496 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.021622 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.521588002 +0000 UTC m=+148.312253944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.022042 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.022422 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.522413215 +0000 UTC m=+148.313079157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.037247 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:20 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:20 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:20 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.037325 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.122989 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.123139 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 19:21:20.623114331 +0000 UTC m=+148.413780273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.123730 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.124094 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.624084978 +0000 UTC m=+148.414750920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.171786 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" event={"ID":"290ac79a-db6b-4fbb-b0ee-e4b0a397a312","Type":"ContainerStarted","Data":"c7204e2c8a7d8a016024c8509c927ae0b5c9d2fb013b028fa69cc962676c5d0d"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.171845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" event={"ID":"290ac79a-db6b-4fbb-b0ee-e4b0a397a312","Type":"ContainerStarted","Data":"a0d9de4a28c148927232df45925bf84b6eec8861f4e8907b0e98d2d097ffb994"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.173225 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" event={"ID":"ae563ac8-197b-4860-a115-175a12fa1690","Type":"ContainerStarted","Data":"dce2ef1a4209954842f99b8361862d68804a12eda83521ee6eb21d75266033e7"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.176671 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" event={"ID":"6d87b05a-6cc4-4313-8993-ad5378bdc68f","Type":"ContainerStarted","Data":"bac18b7b5508bfe0db2eaa50c9ad9c49a067b7b95abc378a270d4221c1060a53"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.178984 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" event={"ID":"c36f7c9f-bb17-49f7-bce8-7136e7443278","Type":"ContainerStarted","Data":"d55b7273ec5b83cc1e7c8e3552f274535f9f25ae71b59102330ebf9c871c15cb"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.179034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" event={"ID":"c36f7c9f-bb17-49f7-bce8-7136e7443278","Type":"ContainerStarted","Data":"63939ef9954c542349a0d8602343905106749f5ee388820f569085ee4343839b"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.181485 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" event={"ID":"2854140a-273c-41e8-9a78-611994b05d26","Type":"ContainerStarted","Data":"7067b59a1a1d69f5b72ba013e403cbbc6405b69161ffb081985b35fe7c7bb7ff"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.183660 4787 generic.go:334] "Generic (PLEG): container finished" podID="b7424e23-a3c1-4e60-87c8-db2ad78ba2a9" containerID="6e29b882ba940f3768c12668125297bb96fc8705cca58c42c4255645ca210497" exitCode=0 Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.183718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" event={"ID":"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9","Type":"ContainerDied","Data":"6e29b882ba940f3768c12668125297bb96fc8705cca58c42c4255645ca210497"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.183750 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" event={"ID":"b7424e23-a3c1-4e60-87c8-db2ad78ba2a9","Type":"ContainerStarted","Data":"eb5da90fe800c31e41b917ceffa78244a4c4653462ad1b661aa5e789cd5c78d7"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.185642 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" 
event={"ID":"ac7ae820-3827-442c-83b4-aad43aa9e383","Type":"ContainerStarted","Data":"be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.185996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.187302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" event={"ID":"221a034a-d231-46b6-b0ea-624788b21fea","Type":"ContainerStarted","Data":"6abeb6273e2590ef002aeddbda2f15f2a7859dd88bef41941367482a534ab026"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.188089 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwwj8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.188136 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podUID="221a034a-d231-46b6-b0ea-624788b21fea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.188645 4787 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gcqj4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.188769 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" 
podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.188889 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" event={"ID":"803c4ef1-5afd-477b-9833-87325d455383","Type":"ContainerStarted","Data":"fa0ff8a710c60060a0705bd90144c962757a3d43620fa4bc8374204164d2c8ac"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.190127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" event={"ID":"b0d95611-e321-46d4-ba78-b847021133c9","Type":"ContainerStarted","Data":"21850b0dc91b2907c8706d530d3730da0a372cfda316ac9a3b71c93b9fb6a731"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.190887 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.192818 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-shj95" event={"ID":"9ded0889-3bf6-4276-9af2-a7a81df383ea","Type":"ContainerStarted","Data":"367ad96c71508732c4bcdab659be0cf773a16a55702ca37e80c661fa04d59e48"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.193780 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.194067 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" 
podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.203097 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gtnqp" podStartSLOduration=127.20307434 podStartE2EDuration="2m7.20307434s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.201583118 +0000 UTC m=+147.992249060" watchObservedRunningTime="2026-02-19 19:21:20.20307434 +0000 UTC m=+147.993740282" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.209413 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5c8rt" event={"ID":"7dc20e80-624f-45be-90ae-f611007a3ffb","Type":"ContainerStarted","Data":"98736c8c7d659fd8233d95a30b6c4cd37651a97852576aca8d3ade3da795bcdf"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.210007 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.219680 4787 generic.go:334] "Generic (PLEG): container finished" podID="a0538112-d98b-49ff-9618-654279d0ef7f" containerID="5180d5494cb0f6e97fd3f5d660a44e1cb3530e08a7ab681450bd487f93e8a708" exitCode=0 Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.219742 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" event={"ID":"a0538112-d98b-49ff-9618-654279d0ef7f","Type":"ContainerDied","Data":"5180d5494cb0f6e97fd3f5d660a44e1cb3530e08a7ab681450bd487f93e8a708"} Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.222304 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kpxw8 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.222368 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.224723 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.225239 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.725219766 +0000 UTC m=+148.515885708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.245578 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" podStartSLOduration=127.245550093 podStartE2EDuration="2m7.245550093s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.24039592 +0000 UTC m=+148.031061882" watchObservedRunningTime="2026-02-19 19:21:20.245550093 +0000 UTC m=+148.036216035" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.276892 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-brdgc" podStartSLOduration=127.276858096 podStartE2EDuration="2m7.276858096s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.263785841 +0000 UTC m=+148.054451783" watchObservedRunningTime="2026-02-19 19:21:20.276858096 +0000 UTC m=+148.067524038" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.328269 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podStartSLOduration=126.328252438 podStartE2EDuration="2m6.328252438s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.326337425 +0000 UTC m=+148.117003357" watchObservedRunningTime="2026-02-19 19:21:20.328252438 +0000 UTC m=+148.118918380" Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.328704 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.82869096 +0000 UTC m=+148.619356902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.328297 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.344727 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7bpzk" podStartSLOduration=126.344698016 podStartE2EDuration="2m6.344698016s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
19:21:20.29926784 +0000 UTC m=+148.089933782" watchObservedRunningTime="2026-02-19 19:21:20.344698016 +0000 UTC m=+148.135363958" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.363872 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" podStartSLOduration=126.36384852 podStartE2EDuration="2m6.36384852s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.362792721 +0000 UTC m=+148.153458683" watchObservedRunningTime="2026-02-19 19:21:20.36384852 +0000 UTC m=+148.154514462" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.395134 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-szzvk" podStartSLOduration=127.395106851 podStartE2EDuration="2m7.395106851s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.393844146 +0000 UTC m=+148.184510088" watchObservedRunningTime="2026-02-19 19:21:20.395106851 +0000 UTC m=+148.185772803" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.433443 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8rgcw" podStartSLOduration=126.433403948 podStartE2EDuration="2m6.433403948s" podCreationTimestamp="2026-02-19 19:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.433327696 +0000 UTC m=+148.223993638" watchObservedRunningTime="2026-02-19 19:21:20.433403948 +0000 UTC m=+148.224069890" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.440592 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.441137 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:20.941109133 +0000 UTC m=+148.731775075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.544681 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.545401 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.045384589 +0000 UTC m=+148.836050531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.581758 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5c8rt" podStartSLOduration=9.581726972 podStartE2EDuration="9.581726972s" podCreationTimestamp="2026-02-19 19:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:20.580155258 +0000 UTC m=+148.370821200" watchObservedRunningTime="2026-02-19 19:21:20.581726972 +0000 UTC m=+148.372392914" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.648365 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.648839 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.148792701 +0000 UTC m=+148.939458643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.648987 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.649438 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.149421599 +0000 UTC m=+148.940087541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.750713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.751102 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.251079012 +0000 UTC m=+149.041744954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.852301 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.852763 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.352744845 +0000 UTC m=+149.143410787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.870648 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 19:21:20 crc kubenswrapper[4787]: I0219 19:21:20.957792 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:20 crc kubenswrapper[4787]: E0219 19:21:20.958746 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.458722668 +0000 UTC m=+149.249388610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.044511 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:21 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:21 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:21 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.044598 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.060690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.061222 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 19:21:21.561203424 +0000 UTC m=+149.351869366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.162664 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.162934 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.662888328 +0000 UTC m=+149.453554280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.163236 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.163710 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.66368943 +0000 UTC m=+149.454355372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.242571 4787 generic.go:334] "Generic (PLEG): container finished" podID="e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" containerID="f0b255258f985c70a9692a8f775373e24765d43a1f661523ae7d20850feb00ac" exitCode=0 Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.242691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" event={"ID":"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5","Type":"ContainerDied","Data":"f0b255258f985c70a9692a8f775373e24765d43a1f661523ae7d20850feb00ac"} Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.255268 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" event={"ID":"a0538112-d98b-49ff-9618-654279d0ef7f","Type":"ContainerStarted","Data":"cd6c4cfa0299869834f19b6b378bc364a124b35878f23bf87585c7bc118b5ba9"} Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.255337 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" event={"ID":"a0538112-d98b-49ff-9618-654279d0ef7f","Type":"ContainerStarted","Data":"d93bd9744dda5d8fa24727ce67505449ba0770b6e3ca66bca7ace54220211e84"} Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.257580 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kpxw8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 
10.217.0.40:8080: connect: connection refused" start-of-body= Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.257648 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.263619 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.264185 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.264630 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.764615293 +0000 UTC m=+149.555281235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.269557 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.318273 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" podStartSLOduration=128.318240597 podStartE2EDuration="2m8.318240597s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:21.291232895 +0000 UTC m=+149.081898847" watchObservedRunningTime="2026-02-19 19:21:21.318240597 +0000 UTC m=+149.108906539" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.324362 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.367128 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.370702 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.870681129 +0000 UTC m=+149.661347071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.451255 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.469450 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.469734 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.969695788 +0000 UTC m=+149.760361730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.470014 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.470449 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:21.970426839 +0000 UTC m=+149.761092781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.571378 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.571662 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.071596767 +0000 UTC m=+149.862262739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.571744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.572227 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.072213794 +0000 UTC m=+149.862879736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.673209 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.673584 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.673657 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.673691 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.673755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.674013 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.173990481 +0000 UTC m=+149.964656423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.674928 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.689954 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.698417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.700774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.775198 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.776030 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.275751446 +0000 UTC m=+150.066417388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.877124 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:21 crc kubenswrapper[4787]: E0219 19:21:21.877499 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.377472851 +0000 UTC m=+150.168138793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.904565 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.905334 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.907854 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.909571 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.912486 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.921073 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.924751 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.941680 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.980416 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.980523 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:21 crc kubenswrapper[4787]: I0219 19:21:21.980661 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:21 crc 
kubenswrapper[4787]: E0219 19:21:21.980982 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.480961855 +0000 UTC m=+150.271627797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-js449" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.026787 4787 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.043292 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:22 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:22 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:22 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.043785 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.087005 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.087354 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.087493 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.090002 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:22 crc kubenswrapper[4787]: E0219 19:21:22.090140 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 19:21:22.590111247 +0000 UTC m=+150.380777189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.107727 4787 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T19:21:22.026824723Z","Handler":null,"Name":""} Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.123557 4787 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.123628 4787 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.141584 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.191813 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: 
\"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.199676 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.199764 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.225630 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.236926 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-js449\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.292562 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.324181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" event={"ID":"803c4ef1-5afd-477b-9833-87325d455383","Type":"ContainerStarted","Data":"101339838ade0698ded0de61cc2c12315b950e47522391c06e4ee8f0fbf51ce7"} Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.324226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" event={"ID":"803c4ef1-5afd-477b-9833-87325d455383","Type":"ContainerStarted","Data":"cf8d8e2563afc17692d4339682c9ba1e4b5d2e53bc7b3e59c6df8f7c4f71e072"} Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.337095 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z49wh"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.338211 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.346485 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.355103 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z49wh"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.397729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccz7f\" (UniqueName: \"kubernetes.io/projected/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-kube-api-access-ccz7f\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.397825 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-utilities\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.398082 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-catalog-content\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.424124 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:21:22 crc kubenswrapper[4787]: W0219 19:21:22.455782 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c57160efb0f9f31852c84d8d78c38ab9d3094ad86c559d9a3872bd9f4c485dcf WatchSource:0}: Error finding container c57160efb0f9f31852c84d8d78c38ab9d3094ad86c559d9a3872bd9f4c485dcf: Status 404 returned error can't find the container with id c57160efb0f9f31852c84d8d78c38ab9d3094ad86c559d9a3872bd9f4c485dcf Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.489071 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.500567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-catalog-content\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.500695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-utilities\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.500716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccz7f\" (UniqueName: \"kubernetes.io/projected/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-kube-api-access-ccz7f\") pod 
\"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.502198 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-catalog-content\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.502472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-utilities\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.530388 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccz7f\" (UniqueName: \"kubernetes.io/projected/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-kube-api-access-ccz7f\") pod \"community-operators-z49wh\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.543280 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlcfq"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.544900 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.547863 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.571600 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlcfq"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.611209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-catalog-content\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.611744 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mgqm\" (UniqueName: \"kubernetes.io/projected/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-kube-api-access-7mgqm\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.612488 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-utilities\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.697104 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.713816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mgqm\" (UniqueName: \"kubernetes.io/projected/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-kube-api-access-7mgqm\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.713914 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-utilities\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.713952 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-catalog-content\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.714327 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-catalog-content\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.719861 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-utilities\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " 
pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.809153 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mgqm\" (UniqueName: \"kubernetes.io/projected/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-kube-api-access-7mgqm\") pod \"certified-operators-rlcfq\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.813996 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7cwt"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.817480 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.818440 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7cwt"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.944992 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmh5\" (UniqueName: \"kubernetes.io/projected/06e24106-2760-4782-bd63-0efd7e7834eb-kube-api-access-ttmh5\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.945540 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-catalog-content\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.945594 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-utilities\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.945820 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.983229 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.984425 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7b6c"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.987127 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7b6c"] Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.987287 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:22 crc kubenswrapper[4787]: I0219 19:21:22.989574 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.041265 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:23 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:23 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:23 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.041348 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058089 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgb4w\" (UniqueName: \"kubernetes.io/projected/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-kube-api-access-tgb4w\") pod \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058165 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-secret-volume\") pod \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058198 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-config-volume\") pod \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\" (UID: \"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5\") " 
Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058478 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmh5\" (UniqueName: \"kubernetes.io/projected/06e24106-2760-4782-bd63-0efd7e7834eb-kube-api-access-ttmh5\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058530 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24mg\" (UniqueName: \"kubernetes.io/projected/f09dfcd3-ca07-460c-a45c-aed742cc66d2-kube-api-access-l24mg\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-utilities\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058664 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-catalog-content\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-catalog-content\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " 
pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.058780 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-utilities\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.062486 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-catalog-content\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.077190 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-utilities\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.078824 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-kube-api-access-tgb4w" (OuterVolumeSpecName: "kube-api-access-tgb4w") pod "e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" (UID: "e35db86a-2b97-4f8b-bb87-6ab2b004d5e5"). InnerVolumeSpecName "kube-api-access-tgb4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.078909 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" (UID: "e35db86a-2b97-4f8b-bb87-6ab2b004d5e5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.080815 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" (UID: "e35db86a-2b97-4f8b-bb87-6ab2b004d5e5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.091063 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmh5\" (UniqueName: \"kubernetes.io/projected/06e24106-2760-4782-bd63-0efd7e7834eb-kube-api-access-ttmh5\") pod \"community-operators-x7cwt\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.096286 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.101050 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-js449"] Feb 19 19:21:23 crc kubenswrapper[4787]: W0219 19:21:23.117205 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f3e4936_b445_48b8_b4cd_cc5141b34d0e.slice/crio-21ce90d385b60e6cb47d10b8234515cb8be2143f7e573cfe58753daec3dd38a7 WatchSource:0}: Error finding 
container 21ce90d385b60e6cb47d10b8234515cb8be2143f7e573cfe58753daec3dd38a7: Status 404 returned error can't find the container with id 21ce90d385b60e6cb47d10b8234515cb8be2143f7e573cfe58753daec3dd38a7 Feb 19 19:21:23 crc kubenswrapper[4787]: W0219 19:21:23.125982 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48098b79_2446_4f86_a42a_e6f12ab783d5.slice/crio-818469df8677679046e2de087d8dfb56c23cf3e10c0b5a6edd72adac08039696 WatchSource:0}: Error finding container 818469df8677679046e2de087d8dfb56c23cf3e10c0b5a6edd72adac08039696: Status 404 returned error can't find the container with id 818469df8677679046e2de087d8dfb56c23cf3e10c0b5a6edd72adac08039696 Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.149565 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.161370 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-utilities\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.161442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-catalog-content\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.161580 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24mg\" (UniqueName: \"kubernetes.io/projected/f09dfcd3-ca07-460c-a45c-aed742cc66d2-kube-api-access-l24mg\") pod 
\"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.161799 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgb4w\" (UniqueName: \"kubernetes.io/projected/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-kube-api-access-tgb4w\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.161818 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.161827 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.162248 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-utilities\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.162289 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-catalog-content\") pod \"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.184794 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24mg\" (UniqueName: \"kubernetes.io/projected/f09dfcd3-ca07-460c-a45c-aed742cc66d2-kube-api-access-l24mg\") pod 
\"certified-operators-d7b6c\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.186626 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z49wh"] Feb 19 19:21:23 crc kubenswrapper[4787]: W0219 19:21:23.215382 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1439a73_21d8_4a7f_9e3e_7a5bc58cbbf4.slice/crio-e2fd0094be9a55a9655a74d6e40117fbc06eb52d91497302a7913e54473ee62d WatchSource:0}: Error finding container e2fd0094be9a55a9655a74d6e40117fbc06eb52d91497302a7913e54473ee62d: Status 404 returned error can't find the container with id e2fd0094be9a55a9655a74d6e40117fbc06eb52d91497302a7913e54473ee62d Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.256007 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlcfq"] Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.322726 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.323193 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.322862 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 
10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.323277 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.333186 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.352534 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f3e4936-b445-48b8-b4cd-cc5141b34d0e","Type":"ContainerStarted","Data":"21ce90d385b60e6cb47d10b8234515cb8be2143f7e573cfe58753daec3dd38a7"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.370026 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7a20634e729942fa1ae60a86eff6634b9b88a739336dd212a3dfc759f309998f"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.370099 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cfe1db757c3bed9fa2f0200fae7845da53ab15e7576215c99d9b8194e5148087"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.370496 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.404214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ba6a669730f07286465c98035d7245b4f02e047da7b48be0aac5d900129c9523"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.404277 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"033324072f5b907b46feaca9e4760f049b26e147e4cecf8488edb807807c07dd"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.437821 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z49wh" event={"ID":"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4","Type":"ContainerStarted","Data":"e2fd0094be9a55a9655a74d6e40117fbc06eb52d91497302a7913e54473ee62d"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.450496 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-js449" event={"ID":"48098b79-2446-4f86-a42a-e6f12ab783d5","Type":"ContainerStarted","Data":"818469df8677679046e2de087d8dfb56c23cf3e10c0b5a6edd72adac08039696"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.451950 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.453508 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.454622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx" event={"ID":"e35db86a-2b97-4f8b-bb87-6ab2b004d5e5","Type":"ContainerDied","Data":"ece777e149c382ae7e88f919717e69b804077c035717b4ad669b874e622b27df"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.454706 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece777e149c382ae7e88f919717e69b804077c035717b4ad669b874e622b27df" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.476517 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cb77ef47c8b58c2714ecacb3ba49ccedd9acf4a20adf6602414043cf661839b4"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.476573 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c57160efb0f9f31852c84d8d78c38ab9d3094ad86c559d9a3872bd9f4c485dcf"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.504032 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-js449" podStartSLOduration=130.50400219 podStartE2EDuration="2m10.50400219s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:23.476442992 +0000 UTC m=+151.267108954" watchObservedRunningTime="2026-02-19 19:21:23.50400219 +0000 UTC m=+151.294668132" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 
19:21:23.513600 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlcfq" event={"ID":"90aaa1d4-625b-4592-88b2-aad8f37a5dd8","Type":"ContainerStarted","Data":"dfc5659c6af21cb52cbf625693b0eb393218c56a6917337f660af7a05024b30c"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.519714 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7cwt"] Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.542806 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" event={"ID":"803c4ef1-5afd-477b-9833-87325d455383","Type":"ContainerStarted","Data":"f33a3b63a39ea58da23874851e810475165e77e348656a666c6b77f90f17b886"} Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.583881 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6xptg" podStartSLOduration=12.583849915 podStartE2EDuration="12.583849915s" podCreationTimestamp="2026-02-19 19:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:23.576194172 +0000 UTC m=+151.366860114" watchObservedRunningTime="2026-02-19 19:21:23.583849915 +0000 UTC m=+151.374515857" Feb 19 19:21:23 crc kubenswrapper[4787]: I0219 19:21:23.748254 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7b6c"] Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.037524 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:24 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:24 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:24 crc 
kubenswrapper[4787]: healthz check failed Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.038037 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.258488 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.258689 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.262244 4787 patch_prober.go:28] interesting pod/console-f9d7485db-h92w2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.262310 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h92w2" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.513808 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4msqr"] Feb 19 19:21:24 crc kubenswrapper[4787]: E0219 19:21:24.514080 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" containerName="collect-profiles" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.514095 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" containerName="collect-profiles" Feb 19 19:21:24 crc 
kubenswrapper[4787]: I0219 19:21:24.514247 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" containerName="collect-profiles" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.515677 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.522387 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.533842 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4msqr"] Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.550341 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-js449" event={"ID":"48098b79-2446-4f86-a42a-e6f12ab783d5","Type":"ContainerStarted","Data":"710a73bd79d037d03dec54a0ce7333c78e832bd8348950c079083c9ae5eb37dd"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.552958 4787 generic.go:334] "Generic (PLEG): container finished" podID="06e24106-2760-4782-bd63-0efd7e7834eb" containerID="5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246" exitCode=0 Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.553067 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7cwt" event={"ID":"06e24106-2760-4782-bd63-0efd7e7834eb","Type":"ContainerDied","Data":"5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.553123 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7cwt" event={"ID":"06e24106-2760-4782-bd63-0efd7e7834eb","Type":"ContainerStarted","Data":"8b93964daf4be05fce1f04069bf25bcf8933e55f0d9313e7e90a8035230df941"} Feb 19 19:21:24 crc 
kubenswrapper[4787]: I0219 19:21:24.555457 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.556540 4787 generic.go:334] "Generic (PLEG): container finished" podID="3f3e4936-b445-48b8-b4cd-cc5141b34d0e" containerID="90dd86b4737705c096974ab00ebc8522a762162cfce3061fe5e7b0631aad84b1" exitCode=0 Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.557079 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f3e4936-b445-48b8-b4cd-cc5141b34d0e","Type":"ContainerDied","Data":"90dd86b4737705c096974ab00ebc8522a762162cfce3061fe5e7b0631aad84b1"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.560000 4787 generic.go:334] "Generic (PLEG): container finished" podID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerID="c0fdefe4d5ca3f61d437074a597218a71366533fccd3da1ebfd4e6959ec1cde5" exitCode=0 Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.560069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlcfq" event={"ID":"90aaa1d4-625b-4592-88b2-aad8f37a5dd8","Type":"ContainerDied","Data":"c0fdefe4d5ca3f61d437074a597218a71366533fccd3da1ebfd4e6959ec1cde5"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.563980 4787 generic.go:334] "Generic (PLEG): container finished" podID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerID="890f844f4280b6c87e1fefc34981797def0ea135ce4a73a216442cc8ca73d2dc" exitCode=0 Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.564104 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerDied","Data":"890f844f4280b6c87e1fefc34981797def0ea135ce4a73a216442cc8ca73d2dc"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.564148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerStarted","Data":"ed228721fe8b360cc65d509ff297e5af3dab0507889e77719566e58a3cd6f151"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.573803 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerID="98c18fb35f039b8c050fbe02c26c83b6513e20936caaeae995547ef660c63bf9" exitCode=0 Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.574048 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z49wh" event={"ID":"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4","Type":"ContainerDied","Data":"98c18fb35f039b8c050fbe02c26c83b6513e20936caaeae995547ef660c63bf9"} Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.583093 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-utilities\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.583186 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-catalog-content\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.583254 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtn45\" (UniqueName: \"kubernetes.io/projected/51311b0a-7d74-4ee1-983c-3a48a521ded9-kube-api-access-mtn45\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " 
pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.587970 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.588011 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.600266 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.684918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-utilities\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.685009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-catalog-content\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.685063 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtn45\" (UniqueName: \"kubernetes.io/projected/51311b0a-7d74-4ee1-983c-3a48a521ded9-kube-api-access-mtn45\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.687074 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-catalog-content\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.687550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-utilities\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.712120 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtn45\" (UniqueName: \"kubernetes.io/projected/51311b0a-7d74-4ee1-983c-3a48a521ded9-kube-api-access-mtn45\") pod \"redhat-marketplace-4msqr\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.831249 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.920294 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xb6"] Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.921618 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.938072 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xb6"] Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.989693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-utilities\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.989775 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5hh4\" (UniqueName: \"kubernetes.io/projected/46208d7d-262a-4b06-9581-152c8d77b33d-kube-api-access-c5hh4\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:24 crc kubenswrapper[4787]: I0219 19:21:24.989844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-catalog-content\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.014335 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.014907 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.037974 4787 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qz8z6 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]log ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]etcd ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/max-in-flight-filter ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 19:21:25 crc kubenswrapper[4787]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-startinformers ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 19:21:25 crc kubenswrapper[4787]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 19:21:25 crc kubenswrapper[4787]: livez check failed Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.038066 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" podUID="a0538112-d98b-49ff-9618-654279d0ef7f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.038377 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.048001 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:25 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:25 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:25 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.048071 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.091159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.092121 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-utilities\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.092300 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5hh4\" (UniqueName: \"kubernetes.io/projected/46208d7d-262a-4b06-9581-152c8d77b33d-kube-api-access-c5hh4\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.092345 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-catalog-content\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.093384 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-utilities\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.093463 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-catalog-content\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.159222 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5hh4\" (UniqueName: \"kubernetes.io/projected/46208d7d-262a-4b06-9581-152c8d77b33d-kube-api-access-c5hh4\") pod \"redhat-marketplace-m5xb6\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.240139 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.242051 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4msqr"] Feb 19 19:21:25 crc kubenswrapper[4787]: W0219 19:21:25.284808 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51311b0a_7d74_4ee1_983c_3a48a521ded9.slice/crio-61741d98ef09f39ba547ef328c9be080e360d3a3fea6f6305df0c17ab9a7ede8 WatchSource:0}: Error finding container 61741d98ef09f39ba547ef328c9be080e360d3a3fea6f6305df0c17ab9a7ede8: Status 404 returned error can't find the container with id 61741d98ef09f39ba547ef328c9be080e360d3a3fea6f6305df0c17ab9a7ede8 Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.524241 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nnvxf"] Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.525958 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: W0219 19:21:25.529093 4787 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: secrets "redhat-operators-dockercfg-ct8rh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Feb 19 19:21:25 crc kubenswrapper[4787]: E0219 19:21:25.529159 4787 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"redhat-operators-dockercfg-ct8rh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.538701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xb6"] Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.558571 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nnvxf"] Feb 19 19:21:25 crc kubenswrapper[4787]: W0219 19:21:25.565784 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46208d7d_262a_4b06_9581_152c8d77b33d.slice/crio-3d0e89e72db374330950e6b370d251608d5d5008f86fed072566345d371772cc WatchSource:0}: Error finding container 3d0e89e72db374330950e6b370d251608d5d5008f86fed072566345d371772cc: Status 404 returned error can't find the container with id 3d0e89e72db374330950e6b370d251608d5d5008f86fed072566345d371772cc Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.593716 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerStarted","Data":"3d0e89e72db374330950e6b370d251608d5d5008f86fed072566345d371772cc"} Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.596422 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4msqr" event={"ID":"51311b0a-7d74-4ee1-983c-3a48a521ded9","Type":"ContainerStarted","Data":"61741d98ef09f39ba547ef328c9be080e360d3a3fea6f6305df0c17ab9a7ede8"} Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.603411 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.706817 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-utilities\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.706921 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-catalog-content\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.707062 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hhc\" (UniqueName: \"kubernetes.io/projected/bb979100-3b5a-45af-8985-e80f54babd63-kube-api-access-24hhc\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: 
I0219 19:21:25.808626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-utilities\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.808733 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-catalog-content\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.808799 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hhc\" (UniqueName: \"kubernetes.io/projected/bb979100-3b5a-45af-8985-e80f54babd63-kube-api-access-24hhc\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.810252 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-utilities\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.810499 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-catalog-content\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.838455 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-24hhc\" (UniqueName: \"kubernetes.io/projected/bb979100-3b5a-45af-8985-e80f54babd63-kube-api-access-24hhc\") pod \"redhat-operators-nnvxf\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.893825 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.927594 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2hprv"] Feb 19 19:21:25 crc kubenswrapper[4787]: E0219 19:21:25.927950 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3e4936-b445-48b8-b4cd-cc5141b34d0e" containerName="pruner" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.927982 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3e4936-b445-48b8-b4cd-cc5141b34d0e" containerName="pruner" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.928127 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3e4936-b445-48b8-b4cd-cc5141b34d0e" containerName="pruner" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.929096 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:25 crc kubenswrapper[4787]: I0219 19:21:25.950557 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2hprv"] Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.011729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kube-api-access\") pod \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.011822 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kubelet-dir\") pod \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\" (UID: \"3f3e4936-b445-48b8-b4cd-cc5141b34d0e\") " Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.011915 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f3e4936-b445-48b8-b4cd-cc5141b34d0e" (UID: "3f3e4936-b445-48b8-b4cd-cc5141b34d0e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.012359 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.017146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f3e4936-b445-48b8-b4cd-cc5141b34d0e" (UID: "3f3e4936-b445-48b8-b4cd-cc5141b34d0e"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.037850 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:26 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:26 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:26 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.037917 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.113177 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-utilities\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.113268 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-catalog-content\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.113348 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cs2h\" (UniqueName: 
\"kubernetes.io/projected/e5cfc668-6927-40e4-a665-710df8a8ad86-kube-api-access-8cs2h\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.113444 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f3e4936-b445-48b8-b4cd-cc5141b34d0e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.215868 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cs2h\" (UniqueName: \"kubernetes.io/projected/e5cfc668-6927-40e4-a665-710df8a8ad86-kube-api-access-8cs2h\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.215967 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-utilities\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.216002 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-catalog-content\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.216558 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-catalog-content\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " 
pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.217278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-utilities\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.234713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cs2h\" (UniqueName: \"kubernetes.io/projected/e5cfc668-6927-40e4-a665-710df8a8ad86-kube-api-access-8cs2h\") pod \"redhat-operators-2hprv\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.614930 4787 generic.go:334] "Generic (PLEG): container finished" podID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerID="6b6a3c57bcf423aa4c6d59acc09d391a949a54b1b6020f2193c0692f7056b6d8" exitCode=0 Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.615082 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4msqr" event={"ID":"51311b0a-7d74-4ee1-983c-3a48a521ded9","Type":"ContainerDied","Data":"6b6a3c57bcf423aa4c6d59acc09d391a949a54b1b6020f2193c0692f7056b6d8"} Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.622555 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3f3e4936-b445-48b8-b4cd-cc5141b34d0e","Type":"ContainerDied","Data":"21ce90d385b60e6cb47d10b8234515cb8be2143f7e573cfe58753daec3dd38a7"} Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.622618 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ce90d385b60e6cb47d10b8234515cb8be2143f7e573cfe58753daec3dd38a7" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 
19:21:26.622640 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.625021 4787 generic.go:334] "Generic (PLEG): container finished" podID="46208d7d-262a-4b06-9581-152c8d77b33d" containerID="6955fbe28680e365697e02731cd0eb2a98d310697d6cca2df905f9743e381a79" exitCode=0 Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.625105 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerDied","Data":"6955fbe28680e365697e02731cd0eb2a98d310697d6cca2df905f9743e381a79"} Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.842515 4787 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/redhat-operators-nnvxf" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.842633 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.907094 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 19:21:26 crc kubenswrapper[4787]: I0219 19:21:26.914751 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.036584 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:27 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:27 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:27 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.037383 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.124494 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5c8rt" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.266432 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nnvxf"] Feb 19 19:21:27 crc kubenswrapper[4787]: W0219 19:21:27.294059 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb979100_3b5a_45af_8985_e80f54babd63.slice/crio-65d6d4a4d86ee5f4a5bc9bcdf62a10b0a2a1f9acaad443d77434aa9f36065469 WatchSource:0}: Error finding container 65d6d4a4d86ee5f4a5bc9bcdf62a10b0a2a1f9acaad443d77434aa9f36065469: Status 404 returned error can't find the container with id 65d6d4a4d86ee5f4a5bc9bcdf62a10b0a2a1f9acaad443d77434aa9f36065469 Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.490072 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2hprv"] Feb 19 19:21:27 crc 
kubenswrapper[4787]: I0219 19:21:27.519507 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.520307 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.523743 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.523979 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.525263 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 19:21:27 crc kubenswrapper[4787]: W0219 19:21:27.541673 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5cfc668_6927_40e4_a665_710df8a8ad86.slice/crio-a647f7196e41c2456c9816abfcac0af6baecc66206204a6dacd7e96e176294c2 WatchSource:0}: Error finding container a647f7196e41c2456c9816abfcac0af6baecc66206204a6dacd7e96e176294c2: Status 404 returned error can't find the container with id a647f7196e41c2456c9816abfcac0af6baecc66206204a6dacd7e96e176294c2 Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.644365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.644482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.658750 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerStarted","Data":"a647f7196e41c2456c9816abfcac0af6baecc66206204a6dacd7e96e176294c2"} Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.668074 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb979100-3b5a-45af-8985-e80f54babd63" containerID="631311276c26f07db13412dc025987be0f53559edc593cf5dc9462979f34b296" exitCode=0 Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.668269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerDied","Data":"631311276c26f07db13412dc025987be0f53559edc593cf5dc9462979f34b296"} Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.668464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerStarted","Data":"65d6d4a4d86ee5f4a5bc9bcdf62a10b0a2a1f9acaad443d77434aa9f36065469"} Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.746557 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.747961 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.748103 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.773546 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:27 crc kubenswrapper[4787]: I0219 19:21:27.866728 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:28 crc kubenswrapper[4787]: I0219 19:21:28.037726 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:28 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:28 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:28 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:28 crc kubenswrapper[4787]: I0219 19:21:28.037798 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:28 crc kubenswrapper[4787]: I0219 19:21:28.427528 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 19:21:28 crc kubenswrapper[4787]: W0219 19:21:28.445482 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod87ce12bf_d35b_4163_ba69_bf76cd612ee4.slice/crio-bd556a2b4197bda80c8dcca125764c712268c22d3e652267b30a53b5d1c26286 WatchSource:0}: Error finding container bd556a2b4197bda80c8dcca125764c712268c22d3e652267b30a53b5d1c26286: Status 404 returned error can't find the container with id bd556a2b4197bda80c8dcca125764c712268c22d3e652267b30a53b5d1c26286 Feb 19 19:21:28 crc kubenswrapper[4787]: I0219 19:21:28.687799 4787 generic.go:334] "Generic (PLEG): container finished" podID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerID="4dfc9c002da031279ef834fefdb2d8ef5365649781105fb6b1fc05c428259c1e" exitCode=0 Feb 19 19:21:28 crc kubenswrapper[4787]: I0219 19:21:28.687975 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerDied","Data":"4dfc9c002da031279ef834fefdb2d8ef5365649781105fb6b1fc05c428259c1e"} Feb 19 19:21:28 crc kubenswrapper[4787]: I0219 19:21:28.691366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"87ce12bf-d35b-4163-ba69-bf76cd612ee4","Type":"ContainerStarted","Data":"bd556a2b4197bda80c8dcca125764c712268c22d3e652267b30a53b5d1c26286"} Feb 19 19:21:29 crc kubenswrapper[4787]: I0219 19:21:29.036852 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:29 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:29 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:29 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:29 crc kubenswrapper[4787]: I0219 19:21:29.037346 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:29 crc kubenswrapper[4787]: I0219 19:21:29.722701 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"87ce12bf-d35b-4163-ba69-bf76cd612ee4","Type":"ContainerStarted","Data":"1fee79f57d4be252ee6f00510be789796e7039b671174f7b3c00c5e29dbc7a67"} Feb 19 19:21:29 crc kubenswrapper[4787]: I0219 19:21:29.807569 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.807546977 podStartE2EDuration="2.807546977s" podCreationTimestamp="2026-02-19 19:21:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:21:29.807092604 +0000 UTC m=+157.597758546" watchObservedRunningTime="2026-02-19 19:21:29.807546977 +0000 UTC m=+157.598212919" Feb 19 19:21:30 crc kubenswrapper[4787]: I0219 19:21:30.020887 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:30 crc kubenswrapper[4787]: I0219 19:21:30.027846 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" Feb 19 19:21:30 crc kubenswrapper[4787]: I0219 19:21:30.037766 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:30 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:30 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:30 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:30 crc kubenswrapper[4787]: I0219 19:21:30.037854 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:30 crc kubenswrapper[4787]: I0219 19:21:30.736382 4787 generic.go:334] "Generic (PLEG): container finished" podID="87ce12bf-d35b-4163-ba69-bf76cd612ee4" containerID="1fee79f57d4be252ee6f00510be789796e7039b671174f7b3c00c5e29dbc7a67" exitCode=0 Feb 19 19:21:30 crc kubenswrapper[4787]: I0219 19:21:30.736499 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"87ce12bf-d35b-4163-ba69-bf76cd612ee4","Type":"ContainerDied","Data":"1fee79f57d4be252ee6f00510be789796e7039b671174f7b3c00c5e29dbc7a67"} Feb 19 19:21:31 crc kubenswrapper[4787]: I0219 19:21:31.036300 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:31 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:31 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:31 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:31 crc kubenswrapper[4787]: I0219 19:21:31.036383 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:32 crc kubenswrapper[4787]: I0219 19:21:32.039223 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:32 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:32 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:32 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:32 crc kubenswrapper[4787]: I0219 19:21:32.040237 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:33 crc kubenswrapper[4787]: I0219 19:21:33.036560 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 19:21:33 crc kubenswrapper[4787]: [-]has-synced failed: reason withheld Feb 19 19:21:33 crc kubenswrapper[4787]: [+]process-running ok Feb 19 19:21:33 crc kubenswrapper[4787]: healthz check failed Feb 19 19:21:33 crc kubenswrapper[4787]: I0219 19:21:33.037183 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 19:21:33 crc kubenswrapper[4787]: I0219 19:21:33.331152 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6cngl" Feb 19 19:21:34 crc kubenswrapper[4787]: I0219 19:21:34.037114 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:34 crc kubenswrapper[4787]: I0219 19:21:34.040050 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-29dzb" Feb 19 19:21:34 crc kubenswrapper[4787]: I0219 19:21:34.258887 4787 patch_prober.go:28] interesting pod/console-f9d7485db-h92w2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 19:21:34 crc kubenswrapper[4787]: I0219 19:21:34.259052 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h92w2" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 19:21:37 crc kubenswrapper[4787]: I0219 19:21:37.764778 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:37 crc kubenswrapper[4787]: I0219 19:21:37.790741 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f25fce-8c35-4786-94f3-93854459f32a-metrics-certs\") pod \"network-metrics-daemon-cv5f6\" (UID: \"56f25fce-8c35-4786-94f3-93854459f32a\") " pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:37 crc kubenswrapper[4787]: I0219 19:21:37.833445 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cv5f6" Feb 19 19:21:39 crc kubenswrapper[4787]: I0219 19:21:39.263813 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:21:39 crc kubenswrapper[4787]: I0219 19:21:39.264331 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 19:21:41.833190 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"87ce12bf-d35b-4163-ba69-bf76cd612ee4","Type":"ContainerDied","Data":"bd556a2b4197bda80c8dcca125764c712268c22d3e652267b30a53b5d1c26286"} Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 
19:21:41.833240 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd556a2b4197bda80c8dcca125764c712268c22d3e652267b30a53b5d1c26286" Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 19:21:41.848077 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 19:21:41.936767 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kube-api-access\") pod \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 19:21:41.936855 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kubelet-dir\") pod \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\" (UID: \"87ce12bf-d35b-4163-ba69-bf76cd612ee4\") " Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 19:21:41.938061 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "87ce12bf-d35b-4163-ba69-bf76cd612ee4" (UID: "87ce12bf-d35b-4163-ba69-bf76cd612ee4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:21:41 crc kubenswrapper[4787]: I0219 19:21:41.944632 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "87ce12bf-d35b-4163-ba69-bf76cd612ee4" (UID: "87ce12bf-d35b-4163-ba69-bf76cd612ee4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:21:42 crc kubenswrapper[4787]: I0219 19:21:42.038814 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:42 crc kubenswrapper[4787]: I0219 19:21:42.038863 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87ce12bf-d35b-4163-ba69-bf76cd612ee4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:21:42 crc kubenswrapper[4787]: I0219 19:21:42.496053 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:21:42 crc kubenswrapper[4787]: I0219 19:21:42.842896 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 19:21:44 crc kubenswrapper[4787]: I0219 19:21:44.262483 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:44 crc kubenswrapper[4787]: I0219 19:21:44.266795 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:21:54 crc kubenswrapper[4787]: I0219 19:21:54.255231 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" Feb 19 19:21:57 crc kubenswrapper[4787]: E0219 19:21:57.824390 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 19:21:57 crc kubenswrapper[4787]: E0219 19:21:57.825050 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8cs2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2hprv_openshift-marketplace(e5cfc668-6927-40e4-a665-710df8a8ad86): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:21:57 crc kubenswrapper[4787]: E0219 19:21:57.826309 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2hprv" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" Feb 19 19:22:00 crc kubenswrapper[4787]: E0219 19:22:00.438386 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2hprv" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" Feb 19 19:22:00 crc kubenswrapper[4787]: E0219 19:22:00.506499 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 19:22:00 crc kubenswrapper[4787]: E0219 19:22:00.506695 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mgqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rlcfq_openshift-marketplace(90aaa1d4-625b-4592-88b2-aad8f37a5dd8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:22:00 crc kubenswrapper[4787]: E0219 19:22:00.508161 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rlcfq" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" Feb 19 19:22:01 crc 
kubenswrapper[4787]: E0219 19:22:01.735216 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rlcfq" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.842375 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.843090 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttmh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x7cwt_openshift-marketplace(06e24106-2760-4782-bd63-0efd7e7834eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.845052 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x7cwt" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" Feb 19 19:22:01 crc 
kubenswrapper[4787]: E0219 19:22:01.863729 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.863913 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l24mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-d7b6c_openshift-marketplace(f09dfcd3-ca07-460c-a45c-aed742cc66d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.865840 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d7b6c" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.892773 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.892964 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtn45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4msqr_openshift-marketplace(51311b0a-7d74-4ee1-983c-3a48a521ded9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.894281 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4msqr" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" Feb 19 19:22:01 crc 
kubenswrapper[4787]: E0219 19:22:01.918883 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.919167 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5hh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-m5xb6_openshift-marketplace(46208d7d-262a-4b06-9581-152c8d77b33d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.921684 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m5xb6" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" Feb 19 19:22:01 crc kubenswrapper[4787]: I0219 19:22:01.931860 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.950314 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d7b6c" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.953719 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m5xb6" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.958313 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x7cwt" 
podUID="06e24106-2760-4782-bd63-0efd7e7834eb" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.958367 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4msqr" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.960514 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.960685 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24hhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nnvxf_openshift-marketplace(bb979100-3b5a-45af-8985-e80f54babd63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:22:01 crc kubenswrapper[4787]: E0219 19:22:01.962865 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nnvxf" podUID="bb979100-3b5a-45af-8985-e80f54babd63" Feb 19 19:22:02 crc 
kubenswrapper[4787]: I0219 19:22:02.194638 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cv5f6"] Feb 19 19:22:02 crc kubenswrapper[4787]: I0219 19:22:02.956646 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" event={"ID":"56f25fce-8c35-4786-94f3-93854459f32a","Type":"ContainerStarted","Data":"0e9925364debc21de4dd28fe3b80d29fa4e19830ab29e9e5cea8e0fb1b5e6996"} Feb 19 19:22:02 crc kubenswrapper[4787]: I0219 19:22:02.956951 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" event={"ID":"56f25fce-8c35-4786-94f3-93854459f32a","Type":"ContainerStarted","Data":"a8b42de991c68caa63800191c2aac50a970026ba68b10c185d17a21a5447d3d5"} Feb 19 19:22:02 crc kubenswrapper[4787]: I0219 19:22:02.956969 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cv5f6" event={"ID":"56f25fce-8c35-4786-94f3-93854459f32a","Type":"ContainerStarted","Data":"0e39d7c6f99339a281374d036572330d6862000cfabb546aec219509f597ac47"} Feb 19 19:22:02 crc kubenswrapper[4787]: I0219 19:22:02.958897 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerID="d86b64d4498670113ea2df07f298abc714afc5da98125221efe85b1c58ab991b" exitCode=0 Feb 19 19:22:02 crc kubenswrapper[4787]: I0219 19:22:02.959801 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z49wh" event={"ID":"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4","Type":"ContainerDied","Data":"d86b64d4498670113ea2df07f298abc714afc5da98125221efe85b1c58ab991b"} Feb 19 19:22:02 crc kubenswrapper[4787]: E0219 19:22:02.961601 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-nnvxf" podUID="bb979100-3b5a-45af-8985-e80f54babd63" Feb 19 19:22:02 crc kubenswrapper[4787]: I0219 19:22:02.979344 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cv5f6" podStartSLOduration=169.97932026 podStartE2EDuration="2m49.97932026s" podCreationTimestamp="2026-02-19 19:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:22:02.973688313 +0000 UTC m=+190.764354355" watchObservedRunningTime="2026-02-19 19:22:02.97932026 +0000 UTC m=+190.769986202" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.318217 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 19:22:03 crc kubenswrapper[4787]: E0219 19:22:03.319429 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ce12bf-d35b-4163-ba69-bf76cd612ee4" containerName="pruner" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.319444 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ce12bf-d35b-4163-ba69-bf76cd612ee4" containerName="pruner" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.319568 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ce12bf-d35b-4163-ba69-bf76cd612ee4" containerName="pruner" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.320048 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.325933 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.326404 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.337462 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.352943 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b935bf4-a46c-4b43-9abd-73c34344b4df-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.353343 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b935bf4-a46c-4b43-9abd-73c34344b4df-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.454588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b935bf4-a46c-4b43-9abd-73c34344b4df-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.454725 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0b935bf4-a46c-4b43-9abd-73c34344b4df-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.454818 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b935bf4-a46c-4b43-9abd-73c34344b4df-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.483712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b935bf4-a46c-4b43-9abd-73c34344b4df-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.641119 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.968196 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z49wh" event={"ID":"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4","Type":"ContainerStarted","Data":"156a9e4aba0cb1ef65dee8cb9213059c714014631e47fd3656dccd812e2ab802"} Feb 19 19:22:03 crc kubenswrapper[4787]: I0219 19:22:03.991036 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z49wh" podStartSLOduration=3.17723917 podStartE2EDuration="41.991014634s" podCreationTimestamp="2026-02-19 19:21:22 +0000 UTC" firstStartedPulling="2026-02-19 19:21:24.58155604 +0000 UTC m=+152.372221982" lastFinishedPulling="2026-02-19 19:22:03.395331504 +0000 UTC m=+191.185997446" observedRunningTime="2026-02-19 19:22:03.989461081 +0000 UTC m=+191.780127023" watchObservedRunningTime="2026-02-19 19:22:03.991014634 +0000 UTC m=+191.781680576" Feb 19 19:22:04 crc kubenswrapper[4787]: I0219 19:22:04.059140 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 19:22:04 crc kubenswrapper[4787]: W0219 19:22:04.070357 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0b935bf4_a46c_4b43_9abd_73c34344b4df.slice/crio-836b31f6c66c7c253f97121c374798b1e0462521a2d3da7f5b73e00dd137698e WatchSource:0}: Error finding container 836b31f6c66c7c253f97121c374798b1e0462521a2d3da7f5b73e00dd137698e: Status 404 returned error can't find the container with id 836b31f6c66c7c253f97121c374798b1e0462521a2d3da7f5b73e00dd137698e Feb 19 19:22:04 crc kubenswrapper[4787]: I0219 19:22:04.973750 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"0b935bf4-a46c-4b43-9abd-73c34344b4df","Type":"ContainerStarted","Data":"d7f20980b5765a5e318e6b5f03b68ff5c93f0c31cf5cf22c2e680a979b69ebac"} Feb 19 19:22:04 crc kubenswrapper[4787]: I0219 19:22:04.974459 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b935bf4-a46c-4b43-9abd-73c34344b4df","Type":"ContainerStarted","Data":"836b31f6c66c7c253f97121c374798b1e0462521a2d3da7f5b73e00dd137698e"} Feb 19 19:22:04 crc kubenswrapper[4787]: I0219 19:22:04.993061 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.993037688 podStartE2EDuration="1.993037688s" podCreationTimestamp="2026-02-19 19:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:22:04.991453604 +0000 UTC m=+192.782119546" watchObservedRunningTime="2026-02-19 19:22:04.993037688 +0000 UTC m=+192.783703640" Feb 19 19:22:05 crc kubenswrapper[4787]: I0219 19:22:05.983590 4787 generic.go:334] "Generic (PLEG): container finished" podID="0b935bf4-a46c-4b43-9abd-73c34344b4df" containerID="d7f20980b5765a5e318e6b5f03b68ff5c93f0c31cf5cf22c2e680a979b69ebac" exitCode=0 Feb 19 19:22:05 crc kubenswrapper[4787]: I0219 19:22:05.983725 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b935bf4-a46c-4b43-9abd-73c34344b4df","Type":"ContainerDied","Data":"d7f20980b5765a5e318e6b5f03b68ff5c93f0c31cf5cf22c2e680a979b69ebac"} Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.379886 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.428131 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b935bf4-a46c-4b43-9abd-73c34344b4df-kube-api-access\") pod \"0b935bf4-a46c-4b43-9abd-73c34344b4df\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.428379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b935bf4-a46c-4b43-9abd-73c34344b4df-kubelet-dir\") pod \"0b935bf4-a46c-4b43-9abd-73c34344b4df\" (UID: \"0b935bf4-a46c-4b43-9abd-73c34344b4df\") " Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.428691 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b935bf4-a46c-4b43-9abd-73c34344b4df-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b935bf4-a46c-4b43-9abd-73c34344b4df" (UID: "0b935bf4-a46c-4b43-9abd-73c34344b4df"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.439894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b935bf4-a46c-4b43-9abd-73c34344b4df-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b935bf4-a46c-4b43-9abd-73c34344b4df" (UID: "0b935bf4-a46c-4b43-9abd-73c34344b4df"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.530424 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b935bf4-a46c-4b43-9abd-73c34344b4df-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.530464 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b935bf4-a46c-4b43-9abd-73c34344b4df-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.995761 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b935bf4-a46c-4b43-9abd-73c34344b4df","Type":"ContainerDied","Data":"836b31f6c66c7c253f97121c374798b1e0462521a2d3da7f5b73e00dd137698e"} Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.995810 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="836b31f6c66c7c253f97121c374798b1e0462521a2d3da7f5b73e00dd137698e" Feb 19 19:22:07 crc kubenswrapper[4787]: I0219 19:22:07.995835 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.107782 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 19:22:08 crc kubenswrapper[4787]: E0219 19:22:08.108126 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b935bf4-a46c-4b43-9abd-73c34344b4df" containerName="pruner" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.108152 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b935bf4-a46c-4b43-9abd-73c34344b4df" containerName="pruner" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.108287 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b935bf4-a46c-4b43-9abd-73c34344b4df" containerName="pruner" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.108858 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.112392 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.114371 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.120112 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.138062 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.138113 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-var-lock\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.138139 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770b4095-ecd6-4b8e-8af0-b19beaba951c-kube-api-access\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.239245 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.239313 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-var-lock\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.239347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770b4095-ecd6-4b8e-8af0-b19beaba951c-kube-api-access\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.239378 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.239503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-var-lock\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.258523 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770b4095-ecd6-4b8e-8af0-b19beaba951c-kube-api-access\") pod \"installer-9-crc\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.448628 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:08 crc kubenswrapper[4787]: I0219 19:22:08.857682 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 19:22:09 crc kubenswrapper[4787]: I0219 19:22:09.004622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"770b4095-ecd6-4b8e-8af0-b19beaba951c","Type":"ContainerStarted","Data":"463d19de53cb10df8bb97c1a44798c7ff033dfa9a95371d609c1fb6c0753dc26"} Feb 19 19:22:09 crc kubenswrapper[4787]: I0219 19:22:09.263860 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:22:09 crc kubenswrapper[4787]: I0219 19:22:09.264646 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:22:10 crc kubenswrapper[4787]: I0219 19:22:10.012879 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"770b4095-ecd6-4b8e-8af0-b19beaba951c","Type":"ContainerStarted","Data":"2896d02a1deeb6e3c977319ae659846d643915c0b895fe7caebc8526f522e15c"} Feb 19 19:22:10 crc kubenswrapper[4787]: I0219 19:22:10.035689 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.035657279 podStartE2EDuration="2.035657279s" podCreationTimestamp="2026-02-19 19:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:22:10.02834365 +0000 UTC m=+197.819009592" watchObservedRunningTime="2026-02-19 19:22:10.035657279 +0000 UTC m=+197.826323221" Feb 19 19:22:12 crc kubenswrapper[4787]: I0219 19:22:12.698890 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:22:12 crc kubenswrapper[4787]: I0219 19:22:12.704232 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:22:12 crc kubenswrapper[4787]: I0219 19:22:12.843644 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:22:13 crc kubenswrapper[4787]: I0219 19:22:13.069181 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:22:15 crc kubenswrapper[4787]: I0219 19:22:15.055452 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerStarted","Data":"5a7cd167c72b8526942a4abd204e387f3c39c79ccf27a5158df56e86de515383"} Feb 19 19:22:16 crc kubenswrapper[4787]: I0219 19:22:16.065098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerStarted","Data":"05d84e57dcc91ecac3362c751c1c9c8725f872a83fa3a4ebee5804008488b20b"} Feb 19 19:22:16 crc kubenswrapper[4787]: I0219 19:22:16.066769 4787 generic.go:334] "Generic (PLEG): container finished" podID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerID="5a7cd167c72b8526942a4abd204e387f3c39c79ccf27a5158df56e86de515383" exitCode=0 Feb 19 19:22:16 crc kubenswrapper[4787]: I0219 19:22:16.066847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerDied","Data":"5a7cd167c72b8526942a4abd204e387f3c39c79ccf27a5158df56e86de515383"} Feb 19 19:22:16 crc kubenswrapper[4787]: I0219 19:22:16.075178 4787 generic.go:334] "Generic (PLEG): container finished" podID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerID="2a48cc61b0220e739063332495bbfe3cfd93b97bbc6810eec202d0cd92e496da" exitCode=0 Feb 19 19:22:16 crc kubenswrapper[4787]: I0219 19:22:16.075297 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlcfq" event={"ID":"90aaa1d4-625b-4592-88b2-aad8f37a5dd8","Type":"ContainerDied","Data":"2a48cc61b0220e739063332495bbfe3cfd93b97bbc6810eec202d0cd92e496da"} Feb 19 19:22:16 crc kubenswrapper[4787]: I0219 19:22:16.078236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerStarted","Data":"aa54ede12b28aa457a585500dbad701034794d7be3d92860b2c681405be2451d"} Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.088820 4787 generic.go:334] "Generic (PLEG): container finished" podID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerID="aa54ede12b28aa457a585500dbad701034794d7be3d92860b2c681405be2451d" exitCode=0 Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.088923 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerDied","Data":"aa54ede12b28aa457a585500dbad701034794d7be3d92860b2c681405be2451d"} Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.094393 4787 generic.go:334] "Generic (PLEG): container finished" podID="46208d7d-262a-4b06-9581-152c8d77b33d" containerID="05d84e57dcc91ecac3362c751c1c9c8725f872a83fa3a4ebee5804008488b20b" exitCode=0 Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.094532 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerDied","Data":"05d84e57dcc91ecac3362c751c1c9c8725f872a83fa3a4ebee5804008488b20b"} Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.098100 4787 generic.go:334] "Generic (PLEG): container finished" podID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerID="fae27ab3b6947994971c323ffbd3a7ec803bfbe78cd5e355e3edb1a9022bc15d" exitCode=0 Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.098201 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4msqr" event={"ID":"51311b0a-7d74-4ee1-983c-3a48a521ded9","Type":"ContainerDied","Data":"fae27ab3b6947994971c323ffbd3a7ec803bfbe78cd5e355e3edb1a9022bc15d"} Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.117828 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerStarted","Data":"eff2c575e595ac0ca8479419db33a9690598c7501dfa3df92409b88511da1719"} Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.120313 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlcfq" event={"ID":"90aaa1d4-625b-4592-88b2-aad8f37a5dd8","Type":"ContainerStarted","Data":"8f170bfb021aafed08c8c5bddea7912d0f4ad75b828942b6671a9d25c31e6420"} Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.177873 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlcfq" podStartSLOduration=3.158804936 podStartE2EDuration="55.177846153s" podCreationTimestamp="2026-02-19 19:21:22 +0000 UTC" firstStartedPulling="2026-02-19 19:21:24.562654423 +0000 UTC m=+152.353320365" lastFinishedPulling="2026-02-19 19:22:16.58169565 +0000 UTC m=+204.372361582" observedRunningTime="2026-02-19 19:22:17.175666551 
+0000 UTC m=+204.966332493" watchObservedRunningTime="2026-02-19 19:22:17.177846153 +0000 UTC m=+204.968512095" Feb 19 19:22:17 crc kubenswrapper[4787]: I0219 19:22:17.205728 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2hprv" podStartSLOduration=4.247708556 podStartE2EDuration="52.20568923s" podCreationTimestamp="2026-02-19 19:21:25 +0000 UTC" firstStartedPulling="2026-02-19 19:21:28.689962323 +0000 UTC m=+156.480628265" lastFinishedPulling="2026-02-19 19:22:16.647942997 +0000 UTC m=+204.438608939" observedRunningTime="2026-02-19 19:22:17.198450683 +0000 UTC m=+204.989116635" watchObservedRunningTime="2026-02-19 19:22:17.20568923 +0000 UTC m=+204.996355172" Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.129895 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerStarted","Data":"74467cb8c9c015dc1e5dc70a83d114cac44346c44b7b31fd48751b78b12136e3"} Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.133317 4787 generic.go:334] "Generic (PLEG): container finished" podID="06e24106-2760-4782-bd63-0efd7e7834eb" containerID="445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b" exitCode=0 Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.133494 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7cwt" event={"ID":"06e24106-2760-4782-bd63-0efd7e7834eb","Type":"ContainerDied","Data":"445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b"} Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.136348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerStarted","Data":"ded1b537946b7b3fa4a60f0dbdeffa50b60e99285fcc63d5f1c6edc325fc81dd"} Feb 19 19:22:18 crc 
kubenswrapper[4787]: I0219 19:22:18.139358 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4msqr" event={"ID":"51311b0a-7d74-4ee1-983c-3a48a521ded9","Type":"ContainerStarted","Data":"3e5bbf75ec506f0f45ff610e4a30398069cf420d0f30fb909211762cd07273ad"} Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.151519 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7b6c" podStartSLOduration=3.217040736 podStartE2EDuration="56.151494817s" podCreationTimestamp="2026-02-19 19:21:22 +0000 UTC" firstStartedPulling="2026-02-19 19:21:24.572462756 +0000 UTC m=+152.363128698" lastFinishedPulling="2026-02-19 19:22:17.506916837 +0000 UTC m=+205.297582779" observedRunningTime="2026-02-19 19:22:18.150814038 +0000 UTC m=+205.941479980" watchObservedRunningTime="2026-02-19 19:22:18.151494817 +0000 UTC m=+205.942160759" Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.171563 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4msqr" podStartSLOduration=3.258437383 podStartE2EDuration="54.171536701s" podCreationTimestamp="2026-02-19 19:21:24 +0000 UTC" firstStartedPulling="2026-02-19 19:21:26.616950092 +0000 UTC m=+154.407616034" lastFinishedPulling="2026-02-19 19:22:17.53004941 +0000 UTC m=+205.320715352" observedRunningTime="2026-02-19 19:22:18.169074341 +0000 UTC m=+205.959740283" watchObservedRunningTime="2026-02-19 19:22:18.171536701 +0000 UTC m=+205.962202653" Feb 19 19:22:18 crc kubenswrapper[4787]: I0219 19:22:18.194672 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5xb6" podStartSLOduration=3.232515863 podStartE2EDuration="54.194646083s" podCreationTimestamp="2026-02-19 19:21:24 +0000 UTC" firstStartedPulling="2026-02-19 19:21:26.627042683 +0000 UTC m=+154.417708625" lastFinishedPulling="2026-02-19 
19:22:17.589172903 +0000 UTC m=+205.379838845" observedRunningTime="2026-02-19 19:22:18.190220457 +0000 UTC m=+205.980886399" watchObservedRunningTime="2026-02-19 19:22:18.194646083 +0000 UTC m=+205.985312035" Feb 19 19:22:19 crc kubenswrapper[4787]: I0219 19:22:19.147791 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerStarted","Data":"0a8e6eadd5be38a8a3b50e8253a2e1121f26f6df8bba459623929c1b43ae1b60"} Feb 19 19:22:19 crc kubenswrapper[4787]: I0219 19:22:19.150123 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7cwt" event={"ID":"06e24106-2760-4782-bd63-0efd7e7834eb","Type":"ContainerStarted","Data":"2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98"} Feb 19 19:22:19 crc kubenswrapper[4787]: I0219 19:22:19.181748 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7cwt" podStartSLOduration=3.145197591 podStartE2EDuration="57.181721741s" podCreationTimestamp="2026-02-19 19:21:22 +0000 UTC" firstStartedPulling="2026-02-19 19:21:24.555057301 +0000 UTC m=+152.345723243" lastFinishedPulling="2026-02-19 19:22:18.591581451 +0000 UTC m=+206.382247393" observedRunningTime="2026-02-19 19:22:19.178931441 +0000 UTC m=+206.969597383" watchObservedRunningTime="2026-02-19 19:22:19.181721741 +0000 UTC m=+206.972387683" Feb 19 19:22:20 crc kubenswrapper[4787]: I0219 19:22:20.173743 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb979100-3b5a-45af-8985-e80f54babd63" containerID="0a8e6eadd5be38a8a3b50e8253a2e1121f26f6df8bba459623929c1b43ae1b60" exitCode=0 Feb 19 19:22:20 crc kubenswrapper[4787]: I0219 19:22:20.173782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" 
event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerDied","Data":"0a8e6eadd5be38a8a3b50e8253a2e1121f26f6df8bba459623929c1b43ae1b60"} Feb 19 19:22:22 crc kubenswrapper[4787]: I0219 19:22:22.946867 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:22:22 crc kubenswrapper[4787]: I0219 19:22:22.947322 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:22:22 crc kubenswrapper[4787]: I0219 19:22:22.993414 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.150169 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.150861 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.193546 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.234189 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.234261 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.337649 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.338041 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:22:23 crc kubenswrapper[4787]: I0219 19:22:23.376232 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:22:24 crc kubenswrapper[4787]: I0219 19:22:24.201298 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerStarted","Data":"f0e5a808dccb6982462a5a08145cf1fe70f11282edb16629acc02fa06767efce"} Feb 19 19:22:24 crc kubenswrapper[4787]: I0219 19:22:24.225513 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nnvxf" podStartSLOduration=3.120306292 podStartE2EDuration="59.225489679s" podCreationTimestamp="2026-02-19 19:21:25 +0000 UTC" firstStartedPulling="2026-02-19 19:21:27.673110595 +0000 UTC m=+155.463776537" lastFinishedPulling="2026-02-19 19:22:23.778293992 +0000 UTC m=+211.568959924" observedRunningTime="2026-02-19 19:22:24.221383152 +0000 UTC m=+212.012049094" watchObservedRunningTime="2026-02-19 19:22:24.225489679 +0000 UTC m=+212.016155621" Feb 19 19:22:24 crc kubenswrapper[4787]: I0219 19:22:24.242321 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:22:24 crc kubenswrapper[4787]: I0219 19:22:24.831674 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:22:24 crc kubenswrapper[4787]: I0219 19:22:24.832306 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:22:24 crc kubenswrapper[4787]: I0219 19:22:24.905015 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 
19:22:25.075345 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7cwt"] Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.207750 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7cwt" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="registry-server" containerID="cri-o://2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98" gracePeriod=2 Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.241776 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.241843 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.271545 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.277428 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7b6c"] Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.302576 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.580760 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.634797 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttmh5\" (UniqueName: \"kubernetes.io/projected/06e24106-2760-4782-bd63-0efd7e7834eb-kube-api-access-ttmh5\") pod \"06e24106-2760-4782-bd63-0efd7e7834eb\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.634857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-utilities\") pod \"06e24106-2760-4782-bd63-0efd7e7834eb\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.634929 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-catalog-content\") pod \"06e24106-2760-4782-bd63-0efd7e7834eb\" (UID: \"06e24106-2760-4782-bd63-0efd7e7834eb\") " Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.635763 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-utilities" (OuterVolumeSpecName: "utilities") pod "06e24106-2760-4782-bd63-0efd7e7834eb" (UID: "06e24106-2760-4782-bd63-0efd7e7834eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.642873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e24106-2760-4782-bd63-0efd7e7834eb-kube-api-access-ttmh5" (OuterVolumeSpecName: "kube-api-access-ttmh5") pod "06e24106-2760-4782-bd63-0efd7e7834eb" (UID: "06e24106-2760-4782-bd63-0efd7e7834eb"). InnerVolumeSpecName "kube-api-access-ttmh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.697166 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06e24106-2760-4782-bd63-0efd7e7834eb" (UID: "06e24106-2760-4782-bd63-0efd7e7834eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.737384 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttmh5\" (UniqueName: \"kubernetes.io/projected/06e24106-2760-4782-bd63-0efd7e7834eb-kube-api-access-ttmh5\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.737441 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:25 crc kubenswrapper[4787]: I0219 19:22:25.737451 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06e24106-2760-4782-bd63-0efd7e7834eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.217503 4787 generic.go:334] "Generic (PLEG): container finished" podID="06e24106-2760-4782-bd63-0efd7e7834eb" containerID="2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98" exitCode=0 Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.218916 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7cwt" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.220801 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7cwt" event={"ID":"06e24106-2760-4782-bd63-0efd7e7834eb","Type":"ContainerDied","Data":"2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98"} Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.220920 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7cwt" event={"ID":"06e24106-2760-4782-bd63-0efd7e7834eb","Type":"ContainerDied","Data":"8b93964daf4be05fce1f04069bf25bcf8933e55f0d9313e7e90a8035230df941"} Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.220958 4787 scope.go:117] "RemoveContainer" containerID="2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.239221 4787 scope.go:117] "RemoveContainer" containerID="445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.270410 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7cwt"] Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.270480 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7cwt"] Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.272678 4787 scope.go:117] "RemoveContainer" containerID="5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.274811 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.301027 4787 scope.go:117] "RemoveContainer" containerID="2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98" Feb 19 19:22:26 crc 
kubenswrapper[4787]: E0219 19:22:26.301588 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98\": container with ID starting with 2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98 not found: ID does not exist" containerID="2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.301654 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98"} err="failed to get container status \"2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98\": rpc error: code = NotFound desc = could not find container \"2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98\": container with ID starting with 2101041409a37b76ebd1b5c4a1642377f25e371f7c7249eb34637d8268311d98 not found: ID does not exist" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.301728 4787 scope.go:117] "RemoveContainer" containerID="445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b" Feb 19 19:22:26 crc kubenswrapper[4787]: E0219 19:22:26.302240 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b\": container with ID starting with 445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b not found: ID does not exist" containerID="445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.302300 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b"} err="failed to get container status 
\"445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b\": rpc error: code = NotFound desc = could not find container \"445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b\": container with ID starting with 445b550ac40f17582f7d0f1a8774da03588e7719d34e795fcc707fdf20fdcc9b not found: ID does not exist" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.302337 4787 scope.go:117] "RemoveContainer" containerID="5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246" Feb 19 19:22:26 crc kubenswrapper[4787]: E0219 19:22:26.302690 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246\": container with ID starting with 5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246 not found: ID does not exist" containerID="5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.302737 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246"} err="failed to get container status \"5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246\": rpc error: code = NotFound desc = could not find container \"5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246\": container with ID starting with 5868985f14dc1c240e1552112ff317867da990c6d52cc4888ac64e91b8f2c246 not found: ID does not exist" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.843528 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.843625 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.902551 
4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" path="/var/lib/kubelet/pods/06e24106-2760-4782-bd63-0efd7e7834eb/volumes" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.915953 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.916008 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:22:26 crc kubenswrapper[4787]: I0219 19:22:26.975653 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:22:27 crc kubenswrapper[4787]: I0219 19:22:27.225199 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d7b6c" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="registry-server" containerID="cri-o://74467cb8c9c015dc1e5dc70a83d114cac44346c44b7b31fd48751b78b12136e3" gracePeriod=2 Feb 19 19:22:27 crc kubenswrapper[4787]: I0219 19:22:27.268422 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:22:27 crc kubenswrapper[4787]: I0219 19:22:27.676584 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xb6"] Feb 19 19:22:27 crc kubenswrapper[4787]: I0219 19:22:27.886015 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nnvxf" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="registry-server" probeResult="failure" output=< Feb 19 19:22:27 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 19:22:27 crc kubenswrapper[4787]: > Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.231360 4787 generic.go:334] "Generic (PLEG): 
container finished" podID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerID="74467cb8c9c015dc1e5dc70a83d114cac44346c44b7b31fd48751b78b12136e3" exitCode=0 Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.231452 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerDied","Data":"74467cb8c9c015dc1e5dc70a83d114cac44346c44b7b31fd48751b78b12136e3"} Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.231649 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5xb6" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="registry-server" containerID="cri-o://ded1b537946b7b3fa4a60f0dbdeffa50b60e99285fcc63d5f1c6edc325fc81dd" gracePeriod=2 Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.846487 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.895046 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-utilities\") pod \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.895138 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l24mg\" (UniqueName: \"kubernetes.io/projected/f09dfcd3-ca07-460c-a45c-aed742cc66d2-kube-api-access-l24mg\") pod \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.895172 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-catalog-content\") pod \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\" (UID: \"f09dfcd3-ca07-460c-a45c-aed742cc66d2\") " Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.895948 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-utilities" (OuterVolumeSpecName: "utilities") pod "f09dfcd3-ca07-460c-a45c-aed742cc66d2" (UID: "f09dfcd3-ca07-460c-a45c-aed742cc66d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.905153 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09dfcd3-ca07-460c-a45c-aed742cc66d2-kube-api-access-l24mg" (OuterVolumeSpecName: "kube-api-access-l24mg") pod "f09dfcd3-ca07-460c-a45c-aed742cc66d2" (UID: "f09dfcd3-ca07-460c-a45c-aed742cc66d2"). InnerVolumeSpecName "kube-api-access-l24mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.949811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f09dfcd3-ca07-460c-a45c-aed742cc66d2" (UID: "f09dfcd3-ca07-460c-a45c-aed742cc66d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.996761 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.996813 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l24mg\" (UniqueName: \"kubernetes.io/projected/f09dfcd3-ca07-460c-a45c-aed742cc66d2-kube-api-access-l24mg\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:28 crc kubenswrapper[4787]: I0219 19:22:28.996960 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09dfcd3-ca07-460c-a45c-aed742cc66d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.248437 4787 generic.go:334] "Generic (PLEG): container finished" podID="46208d7d-262a-4b06-9581-152c8d77b33d" containerID="ded1b537946b7b3fa4a60f0dbdeffa50b60e99285fcc63d5f1c6edc325fc81dd" exitCode=0 Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.248558 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerDied","Data":"ded1b537946b7b3fa4a60f0dbdeffa50b60e99285fcc63d5f1c6edc325fc81dd"} Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.256048 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7b6c" event={"ID":"f09dfcd3-ca07-460c-a45c-aed742cc66d2","Type":"ContainerDied","Data":"ed228721fe8b360cc65d509ff297e5af3dab0507889e77719566e58a3cd6f151"} Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.256129 4787 scope.go:117] "RemoveContainer" containerID="74467cb8c9c015dc1e5dc70a83d114cac44346c44b7b31fd48751b78b12136e3" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 
19:22:29.256372 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7b6c" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.292338 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7b6c"] Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.292403 4787 scope.go:117] "RemoveContainer" containerID="aa54ede12b28aa457a585500dbad701034794d7be3d92860b2c681405be2451d" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.296622 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d7b6c"] Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.316438 4787 scope.go:117] "RemoveContainer" containerID="890f844f4280b6c87e1fefc34981797def0ea135ce4a73a216442cc8ca73d2dc" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.397369 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.503768 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-utilities\") pod \"46208d7d-262a-4b06-9581-152c8d77b33d\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.503954 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5hh4\" (UniqueName: \"kubernetes.io/projected/46208d7d-262a-4b06-9581-152c8d77b33d-kube-api-access-c5hh4\") pod \"46208d7d-262a-4b06-9581-152c8d77b33d\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.504003 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-catalog-content\") pod \"46208d7d-262a-4b06-9581-152c8d77b33d\" (UID: \"46208d7d-262a-4b06-9581-152c8d77b33d\") " Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.504724 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-utilities" (OuterVolumeSpecName: "utilities") pod "46208d7d-262a-4b06-9581-152c8d77b33d" (UID: "46208d7d-262a-4b06-9581-152c8d77b33d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.507267 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46208d7d-262a-4b06-9581-152c8d77b33d-kube-api-access-c5hh4" (OuterVolumeSpecName: "kube-api-access-c5hh4") pod "46208d7d-262a-4b06-9581-152c8d77b33d" (UID: "46208d7d-262a-4b06-9581-152c8d77b33d"). InnerVolumeSpecName "kube-api-access-c5hh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.528022 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46208d7d-262a-4b06-9581-152c8d77b33d" (UID: "46208d7d-262a-4b06-9581-152c8d77b33d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.606157 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.606203 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5hh4\" (UniqueName: \"kubernetes.io/projected/46208d7d-262a-4b06-9581-152c8d77b33d-kube-api-access-c5hh4\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:29 crc kubenswrapper[4787]: I0219 19:22:29.606215 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46208d7d-262a-4b06-9581-152c8d77b33d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.266983 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5xb6" event={"ID":"46208d7d-262a-4b06-9581-152c8d77b33d","Type":"ContainerDied","Data":"3d0e89e72db374330950e6b370d251608d5d5008f86fed072566345d371772cc"} Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.267059 4787 scope.go:117] "RemoveContainer" containerID="ded1b537946b7b3fa4a60f0dbdeffa50b60e99285fcc63d5f1c6edc325fc81dd" Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.267078 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5xb6" Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.288635 4787 scope.go:117] "RemoveContainer" containerID="05d84e57dcc91ecac3362c751c1c9c8725f872a83fa3a4ebee5804008488b20b" Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.298953 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xb6"] Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.304675 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5xb6"] Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.312248 4787 scope.go:117] "RemoveContainer" containerID="6955fbe28680e365697e02731cd0eb2a98d310697d6cca2df905f9743e381a79" Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.900101 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" path="/var/lib/kubelet/pods/46208d7d-262a-4b06-9581-152c8d77b33d/volumes" Feb 19 19:22:30 crc kubenswrapper[4787]: I0219 19:22:30.900805 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" path="/var/lib/kubelet/pods/f09dfcd3-ca07-460c-a45c-aed742cc66d2/volumes" Feb 19 19:22:31 crc kubenswrapper[4787]: I0219 19:22:31.474893 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2hprv"] Feb 19 19:22:31 crc kubenswrapper[4787]: I0219 19:22:31.475181 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2hprv" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="registry-server" containerID="cri-o://eff2c575e595ac0ca8479419db33a9690598c7501dfa3df92409b88511da1719" gracePeriod=2 Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.283571 4787 generic.go:334] "Generic (PLEG): container finished" podID="e5cfc668-6927-40e4-a665-710df8a8ad86" 
containerID="eff2c575e595ac0ca8479419db33a9690598c7501dfa3df92409b88511da1719" exitCode=0 Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.283639 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerDied","Data":"eff2c575e595ac0ca8479419db33a9690598c7501dfa3df92409b88511da1719"} Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.400673 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.455950 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-utilities\") pod \"e5cfc668-6927-40e4-a665-710df8a8ad86\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.456061 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cs2h\" (UniqueName: \"kubernetes.io/projected/e5cfc668-6927-40e4-a665-710df8a8ad86-kube-api-access-8cs2h\") pod \"e5cfc668-6927-40e4-a665-710df8a8ad86\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.456276 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-catalog-content\") pod \"e5cfc668-6927-40e4-a665-710df8a8ad86\" (UID: \"e5cfc668-6927-40e4-a665-710df8a8ad86\") " Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.457358 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-utilities" (OuterVolumeSpecName: "utilities") pod "e5cfc668-6927-40e4-a665-710df8a8ad86" (UID: 
"e5cfc668-6927-40e4-a665-710df8a8ad86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.463330 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5cfc668-6927-40e4-a665-710df8a8ad86-kube-api-access-8cs2h" (OuterVolumeSpecName: "kube-api-access-8cs2h") pod "e5cfc668-6927-40e4-a665-710df8a8ad86" (UID: "e5cfc668-6927-40e4-a665-710df8a8ad86"). InnerVolumeSpecName "kube-api-access-8cs2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.558728 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.558778 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cs2h\" (UniqueName: \"kubernetes.io/projected/e5cfc668-6927-40e4-a665-710df8a8ad86-kube-api-access-8cs2h\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.593813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5cfc668-6927-40e4-a665-710df8a8ad86" (UID: "e5cfc668-6927-40e4-a665-710df8a8ad86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:22:32 crc kubenswrapper[4787]: I0219 19:22:32.660201 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5cfc668-6927-40e4-a665-710df8a8ad86-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.291659 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hprv" event={"ID":"e5cfc668-6927-40e4-a665-710df8a8ad86","Type":"ContainerDied","Data":"a647f7196e41c2456c9816abfcac0af6baecc66206204a6dacd7e96e176294c2"} Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.292079 4787 scope.go:117] "RemoveContainer" containerID="eff2c575e595ac0ca8479419db33a9690598c7501dfa3df92409b88511da1719" Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.292322 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2hprv" Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.308714 4787 scope.go:117] "RemoveContainer" containerID="5a7cd167c72b8526942a4abd204e387f3c39c79ccf27a5158df56e86de515383" Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.312645 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2hprv"] Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.316633 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2hprv"] Feb 19 19:22:33 crc kubenswrapper[4787]: I0219 19:22:33.335538 4787 scope.go:117] "RemoveContainer" containerID="4dfc9c002da031279ef834fefdb2d8ef5365649781105fb6b1fc05c428259c1e" Feb 19 19:22:34 crc kubenswrapper[4787]: I0219 19:22:34.456866 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gcqj4"] Feb 19 19:22:34 crc kubenswrapper[4787]: I0219 19:22:34.898156 4787 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" path="/var/lib/kubelet/pods/e5cfc668-6927-40e4-a665-710df8a8ad86/volumes" Feb 19 19:22:36 crc kubenswrapper[4787]: I0219 19:22:36.885537 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:22:36 crc kubenswrapper[4787]: I0219 19:22:36.928248 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:22:39 crc kubenswrapper[4787]: I0219 19:22:39.264562 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:22:39 crc kubenswrapper[4787]: I0219 19:22:39.264687 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:22:39 crc kubenswrapper[4787]: I0219 19:22:39.264771 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:22:39 crc kubenswrapper[4787]: I0219 19:22:39.265443 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:22:39 crc kubenswrapper[4787]: I0219 19:22:39.265532 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4" gracePeriod=600 Feb 19 19:22:40 crc kubenswrapper[4787]: I0219 19:22:40.334539 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4" exitCode=0 Feb 19 19:22:40 crc kubenswrapper[4787]: I0219 19:22:40.334644 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4"} Feb 19 19:22:40 crc kubenswrapper[4787]: I0219 19:22:40.335186 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"142a5c3ff149fad1ffea5f20dee87392581ffa09a68fc5862a058508f6c30cc2"} Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.052271 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053168 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053183 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053195 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053202 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053219 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053227 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053235 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053243 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053255 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053265 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053280 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053287 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053303 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053309 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053318 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053325 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053334 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053344 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="extract-utilities" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053360 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053368 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053378 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053384 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.053392 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053399 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="extract-content" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053532 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e24106-2760-4782-bd63-0efd7e7834eb" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053549 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5cfc668-6927-40e4-a665-710df8a8ad86" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053569 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="46208d7d-262a-4b06-9581-152c8d77b33d" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053578 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09dfcd3-ca07-460c-a45c-aed742cc66d2" containerName="registry-server" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.053992 4787 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.054193 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.054282 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752" gracePeriod=15 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.054434 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde" gracePeriod=15 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.054385 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4" gracePeriod=15 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.054476 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1" gracePeriod=15 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.054516 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9" gracePeriod=15 Feb 19 19:22:47 crc 
kubenswrapper[4787]: I0219 19:22:47.055869 4787 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056207 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056231 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056251 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056265 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056281 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056293 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056322 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056334 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056350 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 19:22:47 crc 
kubenswrapper[4787]: I0219 19:22:47.056364 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056393 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056407 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056425 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056437 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056603 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056653 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056671 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056688 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056705 4787 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056725 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 19:22:47 crc kubenswrapper[4787]: E0219 19:22:47.056889 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.056907 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.057182 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.174706 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175189 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175233 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175279 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175322 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175376 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175474 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.175525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.276708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.276784 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.276848 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.276940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.276964 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277013 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.276994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277036 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277113 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277121 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277153 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277186 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277216 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277264 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277256 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.277302 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.381295 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.382743 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.383841 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4" exitCode=0 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.383896 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde" exitCode=0 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.383953 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1" exitCode=0 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.383966 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9" exitCode=2 Feb 19 19:22:47 crc 
kubenswrapper[4787]: I0219 19:22:47.383964 4787 scope.go:117] "RemoveContainer" containerID="7b4bed2e69e7cb0c91a57fdb6c0682938c2cb044ce4c02ee09fc2324d3c539b9" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.385455 4787 generic.go:334] "Generic (PLEG): container finished" podID="770b4095-ecd6-4b8e-8af0-b19beaba951c" containerID="2896d02a1deeb6e3c977319ae659846d643915c0b895fe7caebc8526f522e15c" exitCode=0 Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.385498 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"770b4095-ecd6-4b8e-8af0-b19beaba951c","Type":"ContainerDied","Data":"2896d02a1deeb6e3c977319ae659846d643915c0b895fe7caebc8526f522e15c"} Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.386712 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:47 crc kubenswrapper[4787]: I0219 19:22:47.387189 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.397812 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.659141 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.659977 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.660427 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.703223 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-var-lock\") pod \"770b4095-ecd6-4b8e-8af0-b19beaba951c\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.703280 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-kubelet-dir\") pod \"770b4095-ecd6-4b8e-8af0-b19beaba951c\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.703331 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770b4095-ecd6-4b8e-8af0-b19beaba951c-kube-api-access\") pod \"770b4095-ecd6-4b8e-8af0-b19beaba951c\" (UID: \"770b4095-ecd6-4b8e-8af0-b19beaba951c\") " Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.703366 4787 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-var-lock" (OuterVolumeSpecName: "var-lock") pod "770b4095-ecd6-4b8e-8af0-b19beaba951c" (UID: "770b4095-ecd6-4b8e-8af0-b19beaba951c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.703455 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "770b4095-ecd6-4b8e-8af0-b19beaba951c" (UID: "770b4095-ecd6-4b8e-8af0-b19beaba951c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.703525 4787 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.710191 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770b4095-ecd6-4b8e-8af0-b19beaba951c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "770b4095-ecd6-4b8e-8af0-b19beaba951c" (UID: "770b4095-ecd6-4b8e-8af0-b19beaba951c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.805038 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/770b4095-ecd6-4b8e-8af0-b19beaba951c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:48 crc kubenswrapper[4787]: I0219 19:22:48.805082 4787 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/770b4095-ecd6-4b8e-8af0-b19beaba951c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.407843 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.410380 4787 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752" exitCode=0 Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.410435 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a08a7ecaeaf39f6271e17b593d9ace92075494e4d582854cffbf0c8334fc27e" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.412583 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"770b4095-ecd6-4b8e-8af0-b19beaba951c","Type":"ContainerDied","Data":"463d19de53cb10df8bb97c1a44798c7ff033dfa9a95371d609c1fb6c0753dc26"} Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.412681 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463d19de53cb10df8bb97c1a44798c7ff033dfa9a95371d609c1fb6c0753dc26" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.412687 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.423818 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.427457 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.428167 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.428535 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.428827 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517061 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517127 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517223 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517215 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517300 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517343 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517788 4787 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517815 4787 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:49 crc kubenswrapper[4787]: I0219 19:22:49.517826 4787 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:50 crc kubenswrapper[4787]: I0219 19:22:50.419342 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:50 crc kubenswrapper[4787]: I0219 19:22:50.449343 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:50 crc kubenswrapper[4787]: I0219 19:22:50.449713 4787 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:50 crc kubenswrapper[4787]: I0219 19:22:50.898790 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 19:22:52 crc kubenswrapper[4787]: E0219 19:22:52.103644 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:52 crc kubenswrapper[4787]: I0219 19:22:52.104301 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:52 crc kubenswrapper[4787]: W0219 19:22:52.132709 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-908ef263b5964354fb372abebcd4e236ab7f31935cbe2bdf29f8efd271755e3d WatchSource:0}: Error finding container 908ef263b5964354fb372abebcd4e236ab7f31935cbe2bdf29f8efd271755e3d: Status 404 returned error can't find the container with id 908ef263b5964354fb372abebcd4e236ab7f31935cbe2bdf29f8efd271755e3d Feb 19 19:22:52 crc kubenswrapper[4787]: E0219 19:22:52.140243 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895bc36d5b0dd8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" 
already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 19:22:52.13955214 +0000 UTC m=+239.930218082,LastTimestamp:2026-02-19 19:22:52.13955214 +0000 UTC m=+239.930218082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 19:22:52 crc kubenswrapper[4787]: I0219 19:22:52.434797 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750"} Feb 19 19:22:52 crc kubenswrapper[4787]: I0219 19:22:52.435235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"908ef263b5964354fb372abebcd4e236ab7f31935cbe2bdf29f8efd271755e3d"} Feb 19 19:22:52 crc kubenswrapper[4787]: I0219 19:22:52.896498 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:53 crc kubenswrapper[4787]: E0219 19:22:53.440469 4787 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:22:53 crc kubenswrapper[4787]: I0219 19:22:53.440960 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:53 crc kubenswrapper[4787]: E0219 19:22:53.460831 4787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895bc36d5b0dd8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 19:22:52.13955214 +0000 UTC m=+239.930218082,LastTimestamp:2026-02-19 19:22:52.13955214 +0000 UTC m=+239.930218082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.196330 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.196861 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: 
E0219 19:22:54.197260 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.197682 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.198255 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: I0219 19:22:54.198414 4787 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.198968 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.401362 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.803124 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: 
connection refused" interval="800ms" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.843669 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:22:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:22:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:22:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T19:22:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.844229 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.844701 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.845317 4787 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.845879 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:54 crc kubenswrapper[4787]: E0219 19:22:54.845917 4787 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 19:22:55 crc kubenswrapper[4787]: E0219 19:22:55.604897 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Feb 19 19:22:57 crc kubenswrapper[4787]: E0219 19:22:57.205983 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.493598 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" containerName="oauth-openshift" containerID="cri-o://be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7" gracePeriod=15 Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.865826 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.867029 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.867288 4787 status_manager.go:851] "Failed to get status for pod" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gcqj4\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.891106 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.892692 4787 status_manager.go:851] "Failed to get status for pod" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gcqj4\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.893352 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.908801 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.908848 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:22:59 crc kubenswrapper[4787]: E0219 19:22:59.909544 4787 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.910303 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.976652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-error\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.976714 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-service-ca\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.976753 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-policies\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.976789 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-ocp-branding-template\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.976823 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-provider-selection\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: 
\"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.976858 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-login\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.977179 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-idp-0-file-data\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.977223 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-session\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.977244 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-router-certs\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.977264 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-trusted-ca-bundle\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc 
kubenswrapper[4787]: I0219 19:22:59.977646 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978021 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-serving-cert\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978046 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-dir\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978067 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-cliconfig\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978096 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh956\" (UniqueName: \"kubernetes.io/projected/ac7ae820-3827-442c-83b4-aad43aa9e383-kube-api-access-rh956\") pod \"ac7ae820-3827-442c-83b4-aad43aa9e383\" (UID: \"ac7ae820-3827-442c-83b4-aad43aa9e383\") " Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 
19:22:59.978280 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978275 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978257 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978584 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.978749 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.982876 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.983227 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.983504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.983677 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.983945 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac7ae820-3827-442c-83b4-aad43aa9e383-kube-api-access-rh956" (OuterVolumeSpecName: "kube-api-access-rh956") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "kube-api-access-rh956". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.983968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.984115 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.984429 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:22:59 crc kubenswrapper[4787]: I0219 19:22:59.984476 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ac7ae820-3827-442c-83b4-aad43aa9e383" (UID: "ac7ae820-3827-442c-83b4-aad43aa9e383"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079314 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079355 4787 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079373 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079387 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh956\" (UniqueName: \"kubernetes.io/projected/ac7ae820-3827-442c-83b4-aad43aa9e383-kube-api-access-rh956\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079401 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079414 4787 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079427 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079446 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079459 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079472 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079484 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079497 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.079509 4787 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac7ae820-3827-442c-83b4-aad43aa9e383-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:00 crc 
kubenswrapper[4787]: E0219 19:23:00.408781 4787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.486035 4787 generic.go:334] "Generic (PLEG): container finished" podID="ac7ae820-3827-442c-83b4-aad43aa9e383" containerID="be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7" exitCode=0 Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.486130 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.486139 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" event={"ID":"ac7ae820-3827-442c-83b4-aad43aa9e383","Type":"ContainerDied","Data":"be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7"} Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.486257 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" event={"ID":"ac7ae820-3827-442c-83b4-aad43aa9e383","Type":"ContainerDied","Data":"e3ffdde6a9da8d7f8a6df5d156f7c30c241e1f987509377ad70e4c101244d39e"} Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.486292 4787 scope.go:117] "RemoveContainer" containerID="be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.487181 4787 status_manager.go:851] "Failed to get status for pod" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gcqj4\": dial tcp 38.102.83.150:6443: 
connect: connection refused" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.487786 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.488756 4787 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="27dfd879ebe41a9c7b74af25eaa69193527c9cb11b9fa64aedf070b656888119" exitCode=0 Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.488811 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"27dfd879ebe41a9c7b74af25eaa69193527c9cb11b9fa64aedf070b656888119"} Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.488883 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eea330be32d2e1d202da1ec883e967b8d5601bb39ad72f1062f1c7bc8e944acf"} Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.489236 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.489263 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.489553 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:23:00 crc kubenswrapper[4787]: E0219 19:23:00.489861 4787 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.489903 4787 status_manager.go:851] "Failed to get status for pod" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gcqj4\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.502354 4787 status_manager.go:851] "Failed to get status for pod" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.503074 4787 status_manager.go:851] "Failed to get status for pod" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" pod="openshift-authentication/oauth-openshift-558db77b4-gcqj4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gcqj4\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.504997 4787 scope.go:117] "RemoveContainer" containerID="be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7" Feb 19 19:23:00 crc kubenswrapper[4787]: E0219 19:23:00.505501 4787 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7\": container with ID starting with be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7 not found: ID does not exist" containerID="be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7" Feb 19 19:23:00 crc kubenswrapper[4787]: I0219 19:23:00.505539 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7"} err="failed to get container status \"be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7\": rpc error: code = NotFound desc = could not find container \"be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7\": container with ID starting with be90241bb367476b8616eb1a93a314dac5bd4a8e802868c27a355f75063805d7 not found: ID does not exist" Feb 19 19:23:01 crc kubenswrapper[4787]: I0219 19:23:01.500679 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2c4a71ec4c247ab643484a03c2019564a76cb278bfb24607d2f6fdd7001e5e76"} Feb 19 19:23:01 crc kubenswrapper[4787]: I0219 19:23:01.501187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0316a73ceb84b5bd6b4176918bdf96f2713c3d07a846576d27c0698b75e21190"} Feb 19 19:23:01 crc kubenswrapper[4787]: I0219 19:23:01.501203 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f9cf6eb7a104c56280b46e1efa3372eaaeae2397293cf432cb0d7515bf314f1"} Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.509855 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d116c316934bd70732442816a5e61a816f1ab299139f222db7b03d89963e7f5"} Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.509935 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c348c4c09a8bd76bdeb7956adfe8be472846086972daa2ab23cdc556be3541c5"} Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.510049 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.510201 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.510230 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.513589 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.513673 4787 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9" exitCode=1 Feb 19 19:23:02 crc kubenswrapper[4787]: I0219 19:23:02.513718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9"} Feb 19 19:23:02 crc 
kubenswrapper[4787]: I0219 19:23:02.514330 4787 scope.go:117] "RemoveContainer" containerID="c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9" Feb 19 19:23:03 crc kubenswrapper[4787]: I0219 19:23:03.525316 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 19:23:03 crc kubenswrapper[4787]: I0219 19:23:03.525696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a08f2349a8d49bd383839dd173b1db02d197e97761d1684a1a68fb824c76d5e"} Feb 19 19:23:04 crc kubenswrapper[4787]: I0219 19:23:04.910965 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:04 crc kubenswrapper[4787]: I0219 19:23:04.911064 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:04 crc kubenswrapper[4787]: I0219 19:23:04.918123 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:07 crc kubenswrapper[4787]: I0219 19:23:07.523077 4787 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:07 crc kubenswrapper[4787]: I0219 19:23:07.553754 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:07 crc kubenswrapper[4787]: I0219 19:23:07.553802 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:07 crc kubenswrapper[4787]: I0219 19:23:07.558927 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:07 crc kubenswrapper[4787]: I0219 19:23:07.600681 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="cb52c61f-91da-4367-8458-6ff0d1e7309a" Feb 19 19:23:08 crc kubenswrapper[4787]: I0219 19:23:08.558464 4787 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:08 crc kubenswrapper[4787]: I0219 19:23:08.559009 4787 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="39b00336-c0de-40ff-ac4e-ab902c952805" Feb 19 19:23:08 crc kubenswrapper[4787]: I0219 19:23:08.564295 4787 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="cb52c61f-91da-4367-8458-6ff0d1e7309a" Feb 19 19:23:09 crc kubenswrapper[4787]: I0219 19:23:09.212294 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:23:11 crc kubenswrapper[4787]: I0219 19:23:11.991417 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:23:11 crc kubenswrapper[4787]: I0219 19:23:11.998224 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:23:12 crc kubenswrapper[4787]: I0219 19:23:12.591836 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 19:23:17 crc kubenswrapper[4787]: I0219 19:23:17.466155 4787 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 19:23:17 crc kubenswrapper[4787]: I0219 19:23:17.502403 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 19:23:18 crc kubenswrapper[4787]: I0219 19:23:18.554949 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 19:23:18 crc kubenswrapper[4787]: I0219 19:23:18.828652 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.238265 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.274572 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.349364 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.494153 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.630115 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.672450 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.676339 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.693368 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.751470 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.785028 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.828644 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 19:23:19 crc kubenswrapper[4787]: I0219 19:23:19.938687 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.131538 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.295828 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.406233 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.431421 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.494202 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" 
Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.520725 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.540633 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.594732 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.610795 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.639123 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.809274 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 19:23:20 crc kubenswrapper[4787]: I0219 19:23:20.938251 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.020562 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.157906 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.237884 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.364586 4787 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.457114 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.687389 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.738766 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.802599 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 19:23:21 crc kubenswrapper[4787]: I0219 19:23:21.906160 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.067668 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.181288 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.409714 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.470056 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.475713 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 
19:23:22.502958 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.709597 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.724315 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.746951 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.771052 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.773496 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.827955 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.888074 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 19:23:22 crc kubenswrapper[4787]: I0219 19:23:22.970269 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.073542 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.193887 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.245589 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.258856 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.259947 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.261396 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.341111 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.369533 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.386927 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.481273 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.526877 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.531085 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 
19:23:23.614370 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.618063 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.643165 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.663806 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.668575 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.682148 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.860761 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.879405 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 19:23:23 crc kubenswrapper[4787]: I0219 19:23:23.986467 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.028266 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.150400 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 19:23:24 crc 
kubenswrapper[4787]: I0219 19:23:24.173124 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.245658 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.274415 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.274420 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.276387 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.289306 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.308518 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.352901 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.380687 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.424226 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.461386 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.699355 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.699514 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.789639 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.814279 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.852878 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.856624 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.872287 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.902767 4787 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.910000 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-gcqj4"] Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.910068 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.915620 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.941200 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.941153187 podStartE2EDuration="17.941153187s" podCreationTimestamp="2026-02-19 19:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:23:24.931050289 +0000 UTC m=+272.721716231" watchObservedRunningTime="2026-02-19 19:23:24.941153187 +0000 UTC m=+272.731819169" Feb 19 19:23:24 crc kubenswrapper[4787]: I0219 19:23:24.974989 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.013193 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.017466 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.028377 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.048455 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.059883 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.060227 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.105686 4787 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.150729 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.212790 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.220500 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.261559 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.380687 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.452135 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.554151 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.585008 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.619035 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.619536 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.627967 4787 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.655580 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.696461 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.729583 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.797430 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.850765 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.877074 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.884771 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.921892 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 19:23:25 crc kubenswrapper[4787]: I0219 19:23:25.969001 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.036203 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 19:23:26 
crc kubenswrapper[4787]: I0219 19:23:26.332495 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.365977 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.431670 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.814324 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.836719 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.855345 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.858340 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.866767 4787 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.902198 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" path="/var/lib/kubelet/pods/ac7ae820-3827-442c-83b4-aad43aa9e383/volumes" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 19:23:26.912134 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 19:23:26 crc kubenswrapper[4787]: I0219 
19:23:26.976056 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.124668 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.197113 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.213867 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.291045 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.344435 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.399454 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.470133 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.510367 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.514395 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.599338 4787 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.630992 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.631325 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.631447 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.637592 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.790813 4787 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.881325 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.888282 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 19:23:27 crc kubenswrapper[4787]: I0219 19:23:27.950463 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.018458 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.070808 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.186354 4787 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.197568 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.285817 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.291349 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.325487 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.352404 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.395420 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-57bcd9fbb-4clxf"] Feb 19 19:23:28 crc kubenswrapper[4787]: E0219 19:23:28.396042 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" containerName="installer" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.396153 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" containerName="installer" Feb 19 19:23:28 crc kubenswrapper[4787]: E0219 19:23:28.396226 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" containerName="oauth-openshift" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.396293 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" containerName="oauth-openshift" 
Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.396485 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="770b4095-ecd6-4b8e-8af0-b19beaba951c" containerName="installer" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.396565 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac7ae820-3827-442c-83b4-aad43aa9e383" containerName="oauth-openshift" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.397178 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.398964 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.400463 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.400556 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.400685 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.403170 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.403642 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.403755 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.403809 4787 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.403927 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.403931 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.404899 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.406210 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.412907 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57bcd9fbb-4clxf"] Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.415051 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.416319 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.422884 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.432196 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.484868 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 
19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.540567 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586150 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfsqj\" (UniqueName: \"kubernetes.io/projected/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-kube-api-access-vfsqj\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586224 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-audit-policies\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586304 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " 
pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-service-ca\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586464 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586493 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586520 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-audit-dir\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586557 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-error\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586584 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586630 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-login\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586665 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-session\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.586699 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-router-certs\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.651993 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.684218 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.687872 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-router-certs\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.687932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfsqj\" (UniqueName: \"kubernetes.io/projected/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-kube-api-access-vfsqj\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc 
kubenswrapper[4787]: I0219 19:23:28.687961 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-audit-policies\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.687984 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688014 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688099 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-service-ca\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688123 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688154 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-audit-dir\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688214 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-error\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 
19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688238 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688262 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-login\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688290 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-session\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.688647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-audit-dir\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.689676 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-audit-policies\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: 
\"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.689876 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.690460 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-service-ca\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.690688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.696687 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.696952 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-session\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.697461 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.697559 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-login\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.699956 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-user-template-error\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.700169 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " 
pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.700704 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-router-certs\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.700782 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.712544 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfsqj\" (UniqueName: \"kubernetes.io/projected/4b08776c-d5da-4eba-b7bf-9a6e0c56c181-kube-api-access-vfsqj\") pod \"oauth-openshift-57bcd9fbb-4clxf\" (UID: \"4b08776c-d5da-4eba-b7bf-9a6e0c56c181\") " pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.721726 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.796276 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.841188 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.854274 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 19:23:28 crc kubenswrapper[4787]: I0219 19:23:28.902444 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.134267 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.191193 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.288048 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.291691 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.355490 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.357253 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: 
I0219 19:23:29.371395 4787 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.386429 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.466549 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.489710 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.539827 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.620403 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.679952 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.685427 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.710332 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.744718 4787 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.777844 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 
19:23:29.915985 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.933831 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.950149 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.954439 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.992086 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.995217 4787 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 19:23:29 crc kubenswrapper[4787]: I0219 19:23:29.995552 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750" gracePeriod=5 Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.020023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57bcd9fbb-4clxf"] Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.119459 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.184967 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 19:23:30 crc 
kubenswrapper[4787]: I0219 19:23:30.198980 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.202893 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.212088 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.257225 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.384670 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.404209 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.443627 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.445541 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.446835 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.516108 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 
19:23:30.560005 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.577792 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.623315 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.689406 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.726724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" event={"ID":"4b08776c-d5da-4eba-b7bf-9a6e0c56c181","Type":"ContainerStarted","Data":"ddda0ce7693ca701b82925a736ed7acce0bc9988e603967e2bf37f4233e39f3d"} Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.726784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" event={"ID":"4b08776c-d5da-4eba-b7bf-9a6e0c56c181","Type":"ContainerStarted","Data":"e7fe6026b92cf667272d667c96b9fe1559db0d8f4fb84beb1674ea0e2ed0e003"} Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.726982 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.734334 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.752160 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" 
podStartSLOduration=56.752135156 podStartE2EDuration="56.752135156s" podCreationTimestamp="2026-02-19 19:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:23:30.750170238 +0000 UTC m=+278.540836200" watchObservedRunningTime="2026-02-19 19:23:30.752135156 +0000 UTC m=+278.542801098" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.810590 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 19:23:30 crc kubenswrapper[4787]: I0219 19:23:30.971102 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.011885 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.141909 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.164031 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.230132 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.261599 4787 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.269057 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.278686 4787 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.583632 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.739515 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 19:23:31 crc kubenswrapper[4787]: I0219 19:23:31.756950 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.009290 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.086855 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.094536 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.169428 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.464777 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.568861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.638304 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 
19:23:32.723778 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.732790 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.752648 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 19:23:32 crc kubenswrapper[4787]: I0219 19:23:32.978127 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 19:23:33 crc kubenswrapper[4787]: I0219 19:23:33.031579 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 19:23:33 crc kubenswrapper[4787]: I0219 19:23:33.079599 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 19:23:33 crc kubenswrapper[4787]: I0219 19:23:33.379674 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 19:23:33 crc kubenswrapper[4787]: I0219 19:23:33.421434 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 19:23:33 crc kubenswrapper[4787]: I0219 19:23:33.927861 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 19:23:33 crc kubenswrapper[4787]: I0219 19:23:33.977046 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 19:23:34 crc kubenswrapper[4787]: I0219 19:23:34.187509 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 19:23:34 crc kubenswrapper[4787]: I0219 19:23:34.775496 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.593222 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.593739 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.768825 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.768896 4787 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750" exitCode=137 Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.768959 4787 scope.go:117] "RemoveContainer" containerID="fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.769013 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.790934 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791044 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791090 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791117 4787 scope.go:117] "RemoveContainer" containerID="fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791219 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791244 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 
19:23:35.791405 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791638 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791817 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791820 4787 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.791867 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:23:35 crc kubenswrapper[4787]: E0219 19:23:35.792125 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750\": container with ID starting with fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750 not found: ID does not exist" containerID="fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.792195 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750"} err="failed to get container status \"fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750\": rpc error: code = NotFound desc = could not find container \"fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750\": container with ID starting with fc7026e065ec4262eea484ed0daa8b47dfa88e64cd6aa407434355476aa95750 not found: ID does not exist" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.802409 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.893297 4787 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.893355 4787 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.893367 4787 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:35 crc kubenswrapper[4787]: I0219 19:23:35.893378 4787 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 19:23:36 crc kubenswrapper[4787]: I0219 19:23:36.526155 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 19:23:36 crc kubenswrapper[4787]: I0219 19:23:36.901701 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 19:23:51 crc kubenswrapper[4787]: I0219 19:23:51.868930 4787 generic.go:334] "Generic (PLEG): container finished" podID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerID="79201305a4650fb99ade5315e49a033c38cfe72be01771742bc934ae9117c973" exitCode=0 Feb 19 19:23:51 crc kubenswrapper[4787]: I0219 19:23:51.869029 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" 
event={"ID":"b0a66bdd-41eb-4f60-9b98-d4d1705347da","Type":"ContainerDied","Data":"79201305a4650fb99ade5315e49a033c38cfe72be01771742bc934ae9117c973"} Feb 19 19:23:51 crc kubenswrapper[4787]: I0219 19:23:51.870544 4787 scope.go:117] "RemoveContainer" containerID="79201305a4650fb99ade5315e49a033c38cfe72be01771742bc934ae9117c973" Feb 19 19:23:52 crc kubenswrapper[4787]: I0219 19:23:52.692040 4787 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 19:23:52 crc kubenswrapper[4787]: I0219 19:23:52.878312 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" event={"ID":"b0a66bdd-41eb-4f60-9b98-d4d1705347da","Type":"ContainerStarted","Data":"d8c6ea0e23dd19cb03ac68fb3e046c1b5b19dc28ca13f2c1f3d984647de38689"} Feb 19 19:23:52 crc kubenswrapper[4787]: I0219 19:23:52.879029 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:23:52 crc kubenswrapper[4787]: I0219 19:23:52.881093 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.295364 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hl7xp"] Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.296543 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" podUID="e08e5866-afcd-4355-a978-894053fa1cb3" containerName="controller-manager" containerID="cri-o://388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5" gracePeriod=30 Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.469633 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5"] Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.469941 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" podUID="840e5c14-8d41-4276-b4d4-b4eb62898080" containerName="route-controller-manager" containerID="cri-o://f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439" gracePeriod=30 Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.824067 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.855367 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxlqd\" (UniqueName: \"kubernetes.io/projected/e08e5866-afcd-4355-a978-894053fa1cb3-kube-api-access-kxlqd\") pod \"e08e5866-afcd-4355-a978-894053fa1cb3\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.855681 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-proxy-ca-bundles\") pod \"e08e5866-afcd-4355-a978-894053fa1cb3\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.855779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-client-ca\") pod \"e08e5866-afcd-4355-a978-894053fa1cb3\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.855804 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e08e5866-afcd-4355-a978-894053fa1cb3-serving-cert\") pod \"e08e5866-afcd-4355-a978-894053fa1cb3\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.855875 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-config\") pod \"e08e5866-afcd-4355-a978-894053fa1cb3\" (UID: \"e08e5866-afcd-4355-a978-894053fa1cb3\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.857022 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-client-ca" (OuterVolumeSpecName: "client-ca") pod "e08e5866-afcd-4355-a978-894053fa1cb3" (UID: "e08e5866-afcd-4355-a978-894053fa1cb3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.857038 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e08e5866-afcd-4355-a978-894053fa1cb3" (UID: "e08e5866-afcd-4355-a978-894053fa1cb3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.857076 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-config" (OuterVolumeSpecName: "config") pod "e08e5866-afcd-4355-a978-894053fa1cb3" (UID: "e08e5866-afcd-4355-a978-894053fa1cb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.863650 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e08e5866-afcd-4355-a978-894053fa1cb3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e08e5866-afcd-4355-a978-894053fa1cb3" (UID: "e08e5866-afcd-4355-a978-894053fa1cb3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.864483 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08e5866-afcd-4355-a978-894053fa1cb3-kube-api-access-kxlqd" (OuterVolumeSpecName: "kube-api-access-kxlqd") pod "e08e5866-afcd-4355-a978-894053fa1cb3" (UID: "e08e5866-afcd-4355-a978-894053fa1cb3"). InnerVolumeSpecName "kube-api-access-kxlqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.869805 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.957759 4787 generic.go:334] "Generic (PLEG): container finished" podID="840e5c14-8d41-4276-b4d4-b4eb62898080" containerID="f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439" exitCode=0 Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.957862 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.957856 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" event={"ID":"840e5c14-8d41-4276-b4d4-b4eb62898080","Type":"ContainerDied","Data":"f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439"} Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.958407 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5" event={"ID":"840e5c14-8d41-4276-b4d4-b4eb62898080","Type":"ContainerDied","Data":"bcbd96ea1961434d299d5cf05aa67620cdda5c9f56b5709d1f75071e282892fb"} Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.958436 4787 scope.go:117] "RemoveContainer" containerID="f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.957910 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk47r\" (UniqueName: \"kubernetes.io/projected/840e5c14-8d41-4276-b4d4-b4eb62898080-kube-api-access-sk47r\") pod \"840e5c14-8d41-4276-b4d4-b4eb62898080\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.958805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-client-ca\") pod \"840e5c14-8d41-4276-b4d4-b4eb62898080\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.958891 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840e5c14-8d41-4276-b4d4-b4eb62898080-serving-cert\") pod \"840e5c14-8d41-4276-b4d4-b4eb62898080\" 
(UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.958928 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-config\") pod \"840e5c14-8d41-4276-b4d4-b4eb62898080\" (UID: \"840e5c14-8d41-4276-b4d4-b4eb62898080\") " Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.959419 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-client-ca" (OuterVolumeSpecName: "client-ca") pod "840e5c14-8d41-4276-b4d4-b4eb62898080" (UID: "840e5c14-8d41-4276-b4d4-b4eb62898080"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.959964 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-config" (OuterVolumeSpecName: "config") pod "840e5c14-8d41-4276-b4d4-b4eb62898080" (UID: "840e5c14-8d41-4276-b4d4-b4eb62898080"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.960372 4787 generic.go:334] "Generic (PLEG): container finished" podID="e08e5866-afcd-4355-a978-894053fa1cb3" containerID="388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5" exitCode=0 Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.960432 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" event={"ID":"e08e5866-afcd-4355-a978-894053fa1cb3","Type":"ContainerDied","Data":"388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5"} Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.960468 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" event={"ID":"e08e5866-afcd-4355-a978-894053fa1cb3","Type":"ContainerDied","Data":"afad83d09882d4a4ef91585be014d43673079c6baf6803943ba54905b57ca079"} Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.960542 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hl7xp" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961020 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961357 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961556 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxlqd\" (UniqueName: \"kubernetes.io/projected/e08e5866-afcd-4355-a978-894053fa1cb3-kube-api-access-kxlqd\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961579 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840e5c14-8d41-4276-b4d4-b4eb62898080-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961594 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961627 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e08e5866-afcd-4355-a978-894053fa1cb3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.961642 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08e5866-afcd-4355-a978-894053fa1cb3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.964395 4787 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840e5c14-8d41-4276-b4d4-b4eb62898080-kube-api-access-sk47r" (OuterVolumeSpecName: "kube-api-access-sk47r") pod "840e5c14-8d41-4276-b4d4-b4eb62898080" (UID: "840e5c14-8d41-4276-b4d4-b4eb62898080"). InnerVolumeSpecName "kube-api-access-sk47r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.964444 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840e5c14-8d41-4276-b4d4-b4eb62898080-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "840e5c14-8d41-4276-b4d4-b4eb62898080" (UID: "840e5c14-8d41-4276-b4d4-b4eb62898080"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.978398 4787 scope.go:117] "RemoveContainer" containerID="f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439" Feb 19 19:24:05 crc kubenswrapper[4787]: E0219 19:24:05.978878 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439\": container with ID starting with f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439 not found: ID does not exist" containerID="f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.978911 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439"} err="failed to get container status \"f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439\": rpc error: code = NotFound desc = could not find container \"f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439\": container with ID starting with 
f3cc15318b08dea96505098ab2fe4ef5a60ee3a0f5f8f837f672180cdd8dc439 not found: ID does not exist" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.978933 4787 scope.go:117] "RemoveContainer" containerID="388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5" Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.992335 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hl7xp"] Feb 19 19:24:05 crc kubenswrapper[4787]: I0219 19:24:05.997433 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hl7xp"] Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.000646 4787 scope.go:117] "RemoveContainer" containerID="388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5" Feb 19 19:24:06 crc kubenswrapper[4787]: E0219 19:24:06.001379 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5\": container with ID starting with 388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5 not found: ID does not exist" containerID="388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.001412 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5"} err="failed to get container status \"388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5\": rpc error: code = NotFound desc = could not find container \"388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5\": container with ID starting with 388f1d7897f85b17b078400bcdce5c06a96489503b9b2e434149536f00c944e5 not found: ID does not exist" Feb 19 19:24:06 crc kubenswrapper[4787]: E0219 19:24:06.057810 4787 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08e5866_afcd_4355_a978_894053fa1cb3.slice\": RecentStats: unable to find data in memory cache]" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.063825 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk47r\" (UniqueName: \"kubernetes.io/projected/840e5c14-8d41-4276-b4d4-b4eb62898080-kube-api-access-sk47r\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.063874 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840e5c14-8d41-4276-b4d4-b4eb62898080-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.288281 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5"] Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.292368 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zz4h5"] Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.824572 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw"] Feb 19 19:24:06 crc kubenswrapper[4787]: E0219 19:24:06.824938 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08e5866-afcd-4355-a978-894053fa1cb3" containerName="controller-manager" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.824958 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08e5866-afcd-4355-a978-894053fa1cb3" containerName="controller-manager" Feb 19 19:24:06 crc kubenswrapper[4787]: E0219 19:24:06.824967 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 19:24:06 crc 
kubenswrapper[4787]: I0219 19:24:06.824975 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 19:24:06 crc kubenswrapper[4787]: E0219 19:24:06.824998 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840e5c14-8d41-4276-b4d4-b4eb62898080" containerName="route-controller-manager" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.825006 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="840e5c14-8d41-4276-b4d4-b4eb62898080" containerName="route-controller-manager" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.825134 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.825153 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08e5866-afcd-4355-a978-894053fa1cb3" containerName="controller-manager" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.825163 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="840e5c14-8d41-4276-b4d4-b4eb62898080" containerName="route-controller-manager" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.825669 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.827917 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.828540 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.828853 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-844c996ccc-l722n"] Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.829161 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.829360 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.829577 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.829726 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.830558 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.835131 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.835424 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.835558 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.835964 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.840879 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-844c996ccc-l722n"] Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.844374 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.844684 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.846329 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 19:24:06 crc 
kubenswrapper[4787]: I0219 19:24:06.848625 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw"] Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876119 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-client-ca\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876173 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-proxy-ca-bundles\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876199 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-client-ca\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876237 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f99eb6-3c35-415e-a191-2b06bd1051a6-serving-cert\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: 
I0219 19:24:06.876256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-config\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876301 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d10d5653-e474-4f1c-ba9e-b57227f8c465-serving-cert\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876318 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-config\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876337 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbmc\" (UniqueName: \"kubernetes.io/projected/d10d5653-e474-4f1c-ba9e-b57227f8c465-kube-api-access-bcbmc\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.876360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxfsx\" (UniqueName: 
\"kubernetes.io/projected/b3f99eb6-3c35-415e-a191-2b06bd1051a6-kube-api-access-kxfsx\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.900820 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840e5c14-8d41-4276-b4d4-b4eb62898080" path="/var/lib/kubelet/pods/840e5c14-8d41-4276-b4d4-b4eb62898080/volumes" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.901518 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e08e5866-afcd-4355-a978-894053fa1cb3" path="/var/lib/kubelet/pods/e08e5866-afcd-4355-a978-894053fa1cb3/volumes" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.978169 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-client-ca\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.978221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-proxy-ca-bundles\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.978244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-client-ca\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.978991 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f99eb6-3c35-415e-a191-2b06bd1051a6-serving-cert\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.979590 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-client-ca\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.979716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-config\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.980018 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d10d5653-e474-4f1c-ba9e-b57227f8c465-serving-cert\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.980060 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-config\") 
pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.980111 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcbmc\" (UniqueName: \"kubernetes.io/projected/d10d5653-e474-4f1c-ba9e-b57227f8c465-kube-api-access-bcbmc\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.980145 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxfsx\" (UniqueName: \"kubernetes.io/projected/b3f99eb6-3c35-415e-a191-2b06bd1051a6-kube-api-access-kxfsx\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.980671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-proxy-ca-bundles\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.981054 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-client-ca\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.982508 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-config\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.984075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f99eb6-3c35-415e-a191-2b06bd1051a6-serving-cert\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.984566 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d10d5653-e474-4f1c-ba9e-b57227f8c465-serving-cert\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:06 crc kubenswrapper[4787]: I0219 19:24:06.985061 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-config\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.003965 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcbmc\" (UniqueName: \"kubernetes.io/projected/d10d5653-e474-4f1c-ba9e-b57227f8c465-kube-api-access-bcbmc\") pod \"controller-manager-844c996ccc-l722n\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:07 crc 
kubenswrapper[4787]: I0219 19:24:07.004147 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxfsx\" (UniqueName: \"kubernetes.io/projected/b3f99eb6-3c35-415e-a191-2b06bd1051a6-kube-api-access-kxfsx\") pod \"route-controller-manager-6cdd6d95cc-4v8sw\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.145685 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.156136 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.363800 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw"] Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.408018 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-844c996ccc-l722n"] Feb 19 19:24:07 crc kubenswrapper[4787]: W0219 19:24:07.420325 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd10d5653_e474_4f1c_ba9e_b57227f8c465.slice/crio-5f63d6468937a60604fde3b1da703d949a4728a76217208b2332e513af566327 WatchSource:0}: Error finding container 5f63d6468937a60604fde3b1da703d949a4728a76217208b2332e513af566327: Status 404 returned error can't find the container with id 5f63d6468937a60604fde3b1da703d949a4728a76217208b2332e513af566327 Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.979681 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" 
event={"ID":"b3f99eb6-3c35-415e-a191-2b06bd1051a6","Type":"ContainerStarted","Data":"9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38"} Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.979758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" event={"ID":"b3f99eb6-3c35-415e-a191-2b06bd1051a6","Type":"ContainerStarted","Data":"5051dfc66a7cb2870e33182ab060bf4aff900a9eb076d34941be303cd1755a8d"} Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.980107 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.983595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" event={"ID":"d10d5653-e474-4f1c-ba9e-b57227f8c465","Type":"ContainerStarted","Data":"fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c"} Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.983668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" event={"ID":"d10d5653-e474-4f1c-ba9e-b57227f8c465","Type":"ContainerStarted","Data":"5f63d6468937a60604fde3b1da703d949a4728a76217208b2332e513af566327"} Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.983862 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:07 crc kubenswrapper[4787]: I0219 19:24:07.988497 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:08 crc kubenswrapper[4787]: I0219 19:24:08.026111 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" podStartSLOduration=3.026083594 podStartE2EDuration="3.026083594s" podCreationTimestamp="2026-02-19 19:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:24:08.007012481 +0000 UTC m=+315.797678413" watchObservedRunningTime="2026-02-19 19:24:08.026083594 +0000 UTC m=+315.816749526" Feb 19 19:24:08 crc kubenswrapper[4787]: I0219 19:24:08.028256 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" podStartSLOduration=3.028250398 podStartE2EDuration="3.028250398s" podCreationTimestamp="2026-02-19 19:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:24:08.024185528 +0000 UTC m=+315.814851470" watchObservedRunningTime="2026-02-19 19:24:08.028250398 +0000 UTC m=+315.818916340" Feb 19 19:24:08 crc kubenswrapper[4787]: I0219 19:24:08.081863 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:39 crc kubenswrapper[4787]: I0219 19:24:39.263267 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:24:39 crc kubenswrapper[4787]: I0219 19:24:39.264292 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.282803 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844c996ccc-l722n"] Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.283577 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" podUID="d10d5653-e474-4f1c-ba9e-b57227f8c465" containerName="controller-manager" containerID="cri-o://fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c" gracePeriod=30 Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.289402 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw"] Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.289684 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" podUID="b3f99eb6-3c35-415e-a191-2b06bd1051a6" containerName="route-controller-manager" containerID="cri-o://9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38" gracePeriod=30 Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.778320 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.785428 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.835877 4787 generic.go:334] "Generic (PLEG): container finished" podID="b3f99eb6-3c35-415e-a191-2b06bd1051a6" containerID="9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38" exitCode=0 Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.836074 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.836598 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" event={"ID":"b3f99eb6-3c35-415e-a191-2b06bd1051a6","Type":"ContainerDied","Data":"9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38"} Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.836730 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw" event={"ID":"b3f99eb6-3c35-415e-a191-2b06bd1051a6","Type":"ContainerDied","Data":"5051dfc66a7cb2870e33182ab060bf4aff900a9eb076d34941be303cd1755a8d"} Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.836751 4787 scope.go:117] "RemoveContainer" containerID="9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.839194 4787 generic.go:334] "Generic (PLEG): container finished" podID="d10d5653-e474-4f1c-ba9e-b57227f8c465" containerID="fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c" exitCode=0 Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.839231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" 
event={"ID":"d10d5653-e474-4f1c-ba9e-b57227f8c465","Type":"ContainerDied","Data":"fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c"} Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.839264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" event={"ID":"d10d5653-e474-4f1c-ba9e-b57227f8c465","Type":"ContainerDied","Data":"5f63d6468937a60604fde3b1da703d949a4728a76217208b2332e513af566327"} Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.839325 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844c996ccc-l722n" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854302 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-config\") pod \"d10d5653-e474-4f1c-ba9e-b57227f8c465\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854376 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-proxy-ca-bundles\") pod \"d10d5653-e474-4f1c-ba9e-b57227f8c465\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcbmc\" (UniqueName: \"kubernetes.io/projected/d10d5653-e474-4f1c-ba9e-b57227f8c465-kube-api-access-bcbmc\") pod \"d10d5653-e474-4f1c-ba9e-b57227f8c465\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854450 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-client-ca\") pod \"d10d5653-e474-4f1c-ba9e-b57227f8c465\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854475 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxfsx\" (UniqueName: \"kubernetes.io/projected/b3f99eb6-3c35-415e-a191-2b06bd1051a6-kube-api-access-kxfsx\") pod \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854505 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f99eb6-3c35-415e-a191-2b06bd1051a6-serving-cert\") pod \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854529 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-client-ca\") pod \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-config\") pod \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\" (UID: \"b3f99eb6-3c35-415e-a191-2b06bd1051a6\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.854633 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d10d5653-e474-4f1c-ba9e-b57227f8c465-serving-cert\") pod \"d10d5653-e474-4f1c-ba9e-b57227f8c465\" (UID: \"d10d5653-e474-4f1c-ba9e-b57227f8c465\") " Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.857628 4787 scope.go:117] 
"RemoveContainer" containerID="9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.858118 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-config" (OuterVolumeSpecName: "config") pod "d10d5653-e474-4f1c-ba9e-b57227f8c465" (UID: "d10d5653-e474-4f1c-ba9e-b57227f8c465"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.858941 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-config" (OuterVolumeSpecName: "config") pod "b3f99eb6-3c35-415e-a191-2b06bd1051a6" (UID: "b3f99eb6-3c35-415e-a191-2b06bd1051a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.859015 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d10d5653-e474-4f1c-ba9e-b57227f8c465" (UID: "d10d5653-e474-4f1c-ba9e-b57227f8c465"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.859076 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-client-ca" (OuterVolumeSpecName: "client-ca") pod "b3f99eb6-3c35-415e-a191-2b06bd1051a6" (UID: "b3f99eb6-3c35-415e-a191-2b06bd1051a6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.859078 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-client-ca" (OuterVolumeSpecName: "client-ca") pod "d10d5653-e474-4f1c-ba9e-b57227f8c465" (UID: "d10d5653-e474-4f1c-ba9e-b57227f8c465"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.862117 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3f99eb6-3c35-415e-a191-2b06bd1051a6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b3f99eb6-3c35-415e-a191-2b06bd1051a6" (UID: "b3f99eb6-3c35-415e-a191-2b06bd1051a6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.862147 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10d5653-e474-4f1c-ba9e-b57227f8c465-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d10d5653-e474-4f1c-ba9e-b57227f8c465" (UID: "d10d5653-e474-4f1c-ba9e-b57227f8c465"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: E0219 19:24:45.862253 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38\": container with ID starting with 9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38 not found: ID does not exist" containerID="9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.862296 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38"} err="failed to get container status \"9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38\": rpc error: code = NotFound desc = could not find container \"9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38\": container with ID starting with 9bf7af963291188923760bedce38b2eb9b449c0da46d6e119cfb5c285c26ed38 not found: ID does not exist" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.862328 4787 scope.go:117] "RemoveContainer" containerID="fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.862350 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10d5653-e474-4f1c-ba9e-b57227f8c465-kube-api-access-bcbmc" (OuterVolumeSpecName: "kube-api-access-bcbmc") pod "d10d5653-e474-4f1c-ba9e-b57227f8c465" (UID: "d10d5653-e474-4f1c-ba9e-b57227f8c465"). InnerVolumeSpecName "kube-api-access-bcbmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.862276 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3f99eb6-3c35-415e-a191-2b06bd1051a6-kube-api-access-kxfsx" (OuterVolumeSpecName: "kube-api-access-kxfsx") pod "b3f99eb6-3c35-415e-a191-2b06bd1051a6" (UID: "b3f99eb6-3c35-415e-a191-2b06bd1051a6"). InnerVolumeSpecName "kube-api-access-kxfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.910320 4787 scope.go:117] "RemoveContainer" containerID="fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c" Feb 19 19:24:45 crc kubenswrapper[4787]: E0219 19:24:45.911009 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c\": container with ID starting with fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c not found: ID does not exist" containerID="fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.911063 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c"} err="failed to get container status \"fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c\": rpc error: code = NotFound desc = could not find container \"fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c\": container with ID starting with fe6230035e18d0cfa636e62a0a322fe523fc1d21b605aa72e39602eb52434e7c not found: ID does not exist" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956753 4787 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-proxy-ca-bundles\") on 
node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956807 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcbmc\" (UniqueName: \"kubernetes.io/projected/d10d5653-e474-4f1c-ba9e-b57227f8c465-kube-api-access-bcbmc\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956824 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956845 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxfsx\" (UniqueName: \"kubernetes.io/projected/b3f99eb6-3c35-415e-a191-2b06bd1051a6-kube-api-access-kxfsx\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956860 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3f99eb6-3c35-415e-a191-2b06bd1051a6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956874 4787 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956890 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f99eb6-3c35-415e-a191-2b06bd1051a6-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956941 4787 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d10d5653-e474-4f1c-ba9e-b57227f8c465-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:45 crc kubenswrapper[4787]: I0219 19:24:45.956956 4787 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10d5653-e474-4f1c-ba9e-b57227f8c465-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.172635 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844c996ccc-l722n"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.180339 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-844c996ccc-l722n"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.196955 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.202902 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cdd6d95cc-4v8sw"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.858063 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85564cdc67-nsqlz"] Feb 19 19:24:46 crc kubenswrapper[4787]: E0219 19:24:46.858874 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3f99eb6-3c35-415e-a191-2b06bd1051a6" containerName="route-controller-manager" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.858890 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3f99eb6-3c35-415e-a191-2b06bd1051a6" containerName="route-controller-manager" Feb 19 19:24:46 crc kubenswrapper[4787]: E0219 19:24:46.858900 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10d5653-e474-4f1c-ba9e-b57227f8c465" containerName="controller-manager" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.858906 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10d5653-e474-4f1c-ba9e-b57227f8c465" containerName="controller-manager" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.859011 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b3f99eb6-3c35-415e-a191-2b06bd1051a6" containerName="route-controller-manager" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.859025 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10d5653-e474-4f1c-ba9e-b57227f8c465" containerName="controller-manager" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.859490 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.864911 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.865120 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.865399 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.865567 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.866322 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.866770 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-config\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.866807 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-proxy-ca-bundles\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.866830 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fz8r\" (UniqueName: \"kubernetes.io/projected/6984a09d-3652-4db8-bae6-874ba82dd3a6-kube-api-access-5fz8r\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.866871 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6984a09d-3652-4db8-bae6-874ba82dd3a6-serving-cert\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.866890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-client-ca\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.865782 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.867891 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.869127 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.870708 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.874811 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.874853 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.875114 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.875275 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.876064 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85564cdc67-nsqlz"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.901125 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.901578 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.935706 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3f99eb6-3c35-415e-a191-2b06bd1051a6" 
path="/var/lib/kubelet/pods/b3f99eb6-3c35-415e-a191-2b06bd1051a6/volumes" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.936744 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10d5653-e474-4f1c-ba9e-b57227f8c465" path="/var/lib/kubelet/pods/d10d5653-e474-4f1c-ba9e-b57227f8c465/volumes" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.937413 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x"] Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968014 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-client-ca\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968084 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-config\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-proxy-ca-bundles\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz8r\" (UniqueName: 
\"kubernetes.io/projected/6984a09d-3652-4db8-bae6-874ba82dd3a6-kube-api-access-5fz8r\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968189 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-config\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968245 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6984a09d-3652-4db8-bae6-874ba82dd3a6-serving-cert\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968266 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-client-ca\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68z6t\" (UniqueName: \"kubernetes.io/projected/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-kube-api-access-68z6t\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 
19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.968401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-serving-cert\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.969675 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-proxy-ca-bundles\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.970668 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-client-ca\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.971132 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6984a09d-3652-4db8-bae6-874ba82dd3a6-config\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.979841 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6984a09d-3652-4db8-bae6-874ba82dd3a6-serving-cert\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: 
\"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:46 crc kubenswrapper[4787]: I0219 19:24:46.986954 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz8r\" (UniqueName: \"kubernetes.io/projected/6984a09d-3652-4db8-bae6-874ba82dd3a6-kube-api-access-5fz8r\") pod \"controller-manager-85564cdc67-nsqlz\" (UID: \"6984a09d-3652-4db8-bae6-874ba82dd3a6\") " pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.070024 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68z6t\" (UniqueName: \"kubernetes.io/projected/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-kube-api-access-68z6t\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.070718 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-serving-cert\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.070835 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-client-ca\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.070985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-config\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.073163 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-config\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.075401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-client-ca\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.082703 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-serving-cert\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.099422 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68z6t\" (UniqueName: \"kubernetes.io/projected/b3ac3bda-4e18-4352-b2bf-ee28a2d059ca-kube-api-access-68z6t\") pod \"route-controller-manager-6644687b85-9nz2x\" (UID: \"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca\") " pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 
19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.227423 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.240160 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.485250 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x"] Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.529206 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85564cdc67-nsqlz"] Feb 19 19:24:47 crc kubenswrapper[4787]: W0219 19:24:47.554500 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6984a09d_3652_4db8_bae6_874ba82dd3a6.slice/crio-585d0a5429ecdb40811139ba990786968a25aeac3acd923298a8afa00b7e66f1 WatchSource:0}: Error finding container 585d0a5429ecdb40811139ba990786968a25aeac3acd923298a8afa00b7e66f1: Status 404 returned error can't find the container with id 585d0a5429ecdb40811139ba990786968a25aeac3acd923298a8afa00b7e66f1 Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.861623 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" event={"ID":"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca","Type":"ContainerStarted","Data":"8048f4b511d1c3fdd6209be6c4c9f442af8d958b912e62759dd0957a995e9d04"} Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.862018 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.862036 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" event={"ID":"b3ac3bda-4e18-4352-b2bf-ee28a2d059ca","Type":"ContainerStarted","Data":"530f07861e9fa7bea7e2b89d1c3e8b15a71a456b99306dc63cc65517da3f856e"} Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.863985 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" event={"ID":"6984a09d-3652-4db8-bae6-874ba82dd3a6","Type":"ContainerStarted","Data":"1b0c149213a599fd0da926e66ae9e1cd7e8d61df77f356e1b84a0ce0771d5bce"} Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.864060 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" event={"ID":"6984a09d-3652-4db8-bae6-874ba82dd3a6","Type":"ContainerStarted","Data":"585d0a5429ecdb40811139ba990786968a25aeac3acd923298a8afa00b7e66f1"} Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.864249 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.878388 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" Feb 19 19:24:47 crc kubenswrapper[4787]: I0219 19:24:47.881183 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" podStartSLOduration=2.881162112 podStartE2EDuration="2.881162112s" podCreationTimestamp="2026-02-19 19:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:24:47.879147612 +0000 UTC m=+355.669813554" watchObservedRunningTime="2026-02-19 19:24:47.881162112 +0000 UTC m=+355.671828054" Feb 19 19:24:47 
crc kubenswrapper[4787]: I0219 19:24:47.905704 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" podStartSLOduration=2.905674106 podStartE2EDuration="2.905674106s" podCreationTimestamp="2026-02-19 19:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:24:47.90449893 +0000 UTC m=+355.695164902" watchObservedRunningTime="2026-02-19 19:24:47.905674106 +0000 UTC m=+355.696340048" Feb 19 19:24:48 crc kubenswrapper[4787]: I0219 19:24:48.146796 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.695905 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlcfq"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.697646 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlcfq" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="registry-server" containerID="cri-o://8f170bfb021aafed08c8c5bddea7912d0f4ad75b828942b6671a9d25c31e6420" gracePeriod=30 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.723498 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z49wh"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.723894 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z49wh" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="registry-server" containerID="cri-o://156a9e4aba0cb1ef65dee8cb9213059c714014631e47fd3656dccd812e2ab802" gracePeriod=30 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.741089 4787 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kpxw8"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.741472 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" containerID="cri-o://d8c6ea0e23dd19cb03ac68fb3e046c1b5b19dc28ca13f2c1f3d984647de38689" gracePeriod=30 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.751907 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdqt5"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.752968 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.759322 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4msqr"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.759672 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4msqr" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="registry-server" containerID="cri-o://3e5bbf75ec506f0f45ff610e4a30398069cf420d0f30fb909211762cd07273ad" gracePeriod=30 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.766337 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nnvxf"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.766736 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nnvxf" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="registry-server" containerID="cri-o://f0e5a808dccb6982462a5a08145cf1fe70f11282edb16629acc02fa06767efce" gracePeriod=30 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.782372 
4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdqt5"] Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.841104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34c814cb-c6f7-48b1-8153-e532e5f71bc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.841185 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34c814cb-c6f7-48b1-8153-e532e5f71bc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.841244 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78xw\" (UniqueName: \"kubernetes.io/projected/34c814cb-c6f7-48b1-8153-e532e5f71bc1-kube-api-access-z78xw\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.943432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z78xw\" (UniqueName: \"kubernetes.io/projected/34c814cb-c6f7-48b1-8153-e532e5f71bc1-kube-api-access-z78xw\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.943976 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34c814cb-c6f7-48b1-8153-e532e5f71bc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.944008 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34c814cb-c6f7-48b1-8153-e532e5f71bc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.948383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34c814cb-c6f7-48b1-8153-e532e5f71bc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.951528 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34c814cb-c6f7-48b1-8153-e532e5f71bc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.957393 4787 generic.go:334] "Generic (PLEG): container finished" podID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerID="d8c6ea0e23dd19cb03ac68fb3e046c1b5b19dc28ca13f2c1f3d984647de38689" exitCode=0 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.957495 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" event={"ID":"b0a66bdd-41eb-4f60-9b98-d4d1705347da","Type":"ContainerDied","Data":"d8c6ea0e23dd19cb03ac68fb3e046c1b5b19dc28ca13f2c1f3d984647de38689"} Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.957539 4787 scope.go:117] "RemoveContainer" containerID="79201305a4650fb99ade5315e49a033c38cfe72be01771742bc934ae9117c973" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.964396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z78xw\" (UniqueName: \"kubernetes.io/projected/34c814cb-c6f7-48b1-8153-e532e5f71bc1-kube-api-access-z78xw\") pod \"marketplace-operator-79b997595-gdqt5\" (UID: \"34c814cb-c6f7-48b1-8153-e532e5f71bc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.976267 4787 generic.go:334] "Generic (PLEG): container finished" podID="bb979100-3b5a-45af-8985-e80f54babd63" containerID="f0e5a808dccb6982462a5a08145cf1fe70f11282edb16629acc02fa06767efce" exitCode=0 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.976348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerDied","Data":"f0e5a808dccb6982462a5a08145cf1fe70f11282edb16629acc02fa06767efce"} Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.980878 4787 generic.go:334] "Generic (PLEG): container finished" podID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerID="8f170bfb021aafed08c8c5bddea7912d0f4ad75b828942b6671a9d25c31e6420" exitCode=0 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.980954 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlcfq" 
event={"ID":"90aaa1d4-625b-4592-88b2-aad8f37a5dd8","Type":"ContainerDied","Data":"8f170bfb021aafed08c8c5bddea7912d0f4ad75b828942b6671a9d25c31e6420"} Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.988387 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerID="156a9e4aba0cb1ef65dee8cb9213059c714014631e47fd3656dccd812e2ab802" exitCode=0 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.988443 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z49wh" event={"ID":"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4","Type":"ContainerDied","Data":"156a9e4aba0cb1ef65dee8cb9213059c714014631e47fd3656dccd812e2ab802"} Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.992539 4787 generic.go:334] "Generic (PLEG): container finished" podID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerID="3e5bbf75ec506f0f45ff610e4a30398069cf420d0f30fb909211762cd07273ad" exitCode=0 Feb 19 19:24:57 crc kubenswrapper[4787]: I0219 19:24:57.992579 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4msqr" event={"ID":"51311b0a-7d74-4ee1-983c-3a48a521ded9","Type":"ContainerDied","Data":"3e5bbf75ec506f0f45ff610e4a30398069cf420d0f30fb909211762cd07273ad"} Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.153517 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.343890 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.460021 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mgqm\" (UniqueName: \"kubernetes.io/projected/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-kube-api-access-7mgqm\") pod \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.460561 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-utilities\") pod \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.460635 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-catalog-content\") pod \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\" (UID: \"90aaa1d4-625b-4592-88b2-aad8f37a5dd8\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.462643 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-utilities" (OuterVolumeSpecName: "utilities") pod "90aaa1d4-625b-4592-88b2-aad8f37a5dd8" (UID: "90aaa1d4-625b-4592-88b2-aad8f37a5dd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.483900 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-kube-api-access-7mgqm" (OuterVolumeSpecName: "kube-api-access-7mgqm") pod "90aaa1d4-625b-4592-88b2-aad8f37a5dd8" (UID: "90aaa1d4-625b-4592-88b2-aad8f37a5dd8"). InnerVolumeSpecName "kube-api-access-7mgqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.544629 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.558478 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.560061 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.561410 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-operator-metrics\") pod \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.561484 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-trusted-ca\") pod \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.561685 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgnz5\" (UniqueName: \"kubernetes.io/projected/b0a66bdd-41eb-4f60-9b98-d4d1705347da-kube-api-access-bgnz5\") pod \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\" (UID: \"b0a66bdd-41eb-4f60-9b98-d4d1705347da\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.562047 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mgqm\" (UniqueName: 
\"kubernetes.io/projected/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-kube-api-access-7mgqm\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.562070 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.562247 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b0a66bdd-41eb-4f60-9b98-d4d1705347da" (UID: "b0a66bdd-41eb-4f60-9b98-d4d1705347da"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.567529 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90aaa1d4-625b-4592-88b2-aad8f37a5dd8" (UID: "90aaa1d4-625b-4592-88b2-aad8f37a5dd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.568100 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a66bdd-41eb-4f60-9b98-d4d1705347da-kube-api-access-bgnz5" (OuterVolumeSpecName: "kube-api-access-bgnz5") pod "b0a66bdd-41eb-4f60-9b98-d4d1705347da" (UID: "b0a66bdd-41eb-4f60-9b98-d4d1705347da"). InnerVolumeSpecName "kube-api-access-bgnz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.571410 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b0a66bdd-41eb-4f60-9b98-d4d1705347da" (UID: "b0a66bdd-41eb-4f60-9b98-d4d1705347da"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.595032 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663305 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-utilities\") pod \"bb979100-3b5a-45af-8985-e80f54babd63\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663425 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-catalog-content\") pod \"51311b0a-7d74-4ee1-983c-3a48a521ded9\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663463 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24hhc\" (UniqueName: \"kubernetes.io/projected/bb979100-3b5a-45af-8985-e80f54babd63-kube-api-access-24hhc\") pod \"bb979100-3b5a-45af-8985-e80f54babd63\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663509 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-utilities\") pod \"51311b0a-7d74-4ee1-983c-3a48a521ded9\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663531 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-utilities\") pod \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663576 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtn45\" (UniqueName: \"kubernetes.io/projected/51311b0a-7d74-4ee1-983c-3a48a521ded9-kube-api-access-mtn45\") pod \"51311b0a-7d74-4ee1-983c-3a48a521ded9\" (UID: \"51311b0a-7d74-4ee1-983c-3a48a521ded9\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663595 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccz7f\" (UniqueName: \"kubernetes.io/projected/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-kube-api-access-ccz7f\") pod \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-catalog-content\") pod \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\" (UID: \"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4\") " Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663683 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-catalog-content\") pod \"bb979100-3b5a-45af-8985-e80f54babd63\" (UID: \"bb979100-3b5a-45af-8985-e80f54babd63\") " Feb 19 19:24:58 crc 
kubenswrapper[4787]: I0219 19:24:58.663910 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgnz5\" (UniqueName: \"kubernetes.io/projected/b0a66bdd-41eb-4f60-9b98-d4d1705347da-kube-api-access-bgnz5\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663926 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663941 4787 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0a66bdd-41eb-4f60-9b98-d4d1705347da-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.663952 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90aaa1d4-625b-4592-88b2-aad8f37a5dd8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.671482 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-utilities" (OuterVolumeSpecName: "utilities") pod "51311b0a-7d74-4ee1-983c-3a48a521ded9" (UID: "51311b0a-7d74-4ee1-983c-3a48a521ded9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.671792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-utilities" (OuterVolumeSpecName: "utilities") pod "d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" (UID: "d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.673156 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-utilities" (OuterVolumeSpecName: "utilities") pod "bb979100-3b5a-45af-8985-e80f54babd63" (UID: "bb979100-3b5a-45af-8985-e80f54babd63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.674672 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb979100-3b5a-45af-8985-e80f54babd63-kube-api-access-24hhc" (OuterVolumeSpecName: "kube-api-access-24hhc") pod "bb979100-3b5a-45af-8985-e80f54babd63" (UID: "bb979100-3b5a-45af-8985-e80f54babd63"). InnerVolumeSpecName "kube-api-access-24hhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.675802 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-kube-api-access-ccz7f" (OuterVolumeSpecName: "kube-api-access-ccz7f") pod "d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" (UID: "d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4"). InnerVolumeSpecName "kube-api-access-ccz7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.676910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51311b0a-7d74-4ee1-983c-3a48a521ded9-kube-api-access-mtn45" (OuterVolumeSpecName: "kube-api-access-mtn45") pod "51311b0a-7d74-4ee1-983c-3a48a521ded9" (UID: "51311b0a-7d74-4ee1-983c-3a48a521ded9"). InnerVolumeSpecName "kube-api-access-mtn45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.701719 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51311b0a-7d74-4ee1-983c-3a48a521ded9" (UID: "51311b0a-7d74-4ee1-983c-3a48a521ded9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.756086 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" (UID: "d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.765944 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.765986 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.766037 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtn45\" (UniqueName: \"kubernetes.io/projected/51311b0a-7d74-4ee1-983c-3a48a521ded9-kube-api-access-mtn45\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.766053 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccz7f\" (UniqueName: \"kubernetes.io/projected/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-kube-api-access-ccz7f\") on 
node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.766065 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.766076 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.766105 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51311b0a-7d74-4ee1-983c-3a48a521ded9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.766117 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24hhc\" (UniqueName: \"kubernetes.io/projected/bb979100-3b5a-45af-8985-e80f54babd63-kube-api-access-24hhc\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.797338 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb979100-3b5a-45af-8985-e80f54babd63" (UID: "bb979100-3b5a-45af-8985-e80f54babd63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.867944 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb979100-3b5a-45af-8985-e80f54babd63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:24:58 crc kubenswrapper[4787]: I0219 19:24:58.913103 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gdqt5"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.001147 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nnvxf" event={"ID":"bb979100-3b5a-45af-8985-e80f54babd63","Type":"ContainerDied","Data":"65d6d4a4d86ee5f4a5bc9bcdf62a10b0a2a1f9acaad443d77434aa9f36065469"} Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.001231 4787 scope.go:117] "RemoveContainer" containerID="f0e5a808dccb6982462a5a08145cf1fe70f11282edb16629acc02fa06767efce" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.001279 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nnvxf" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.007895 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlcfq" event={"ID":"90aaa1d4-625b-4592-88b2-aad8f37a5dd8","Type":"ContainerDied","Data":"dfc5659c6af21cb52cbf625693b0eb393218c56a6917337f660af7a05024b30c"} Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.007967 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlcfq" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.011107 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z49wh" event={"ID":"d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4","Type":"ContainerDied","Data":"e2fd0094be9a55a9655a74d6e40117fbc06eb52d91497302a7913e54473ee62d"} Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.011149 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z49wh" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.016487 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4msqr" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.016569 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4msqr" event={"ID":"51311b0a-7d74-4ee1-983c-3a48a521ded9","Type":"ContainerDied","Data":"61741d98ef09f39ba547ef328c9be080e360d3a3fea6f6305df0c17ab9a7ede8"} Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.021997 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" event={"ID":"b0a66bdd-41eb-4f60-9b98-d4d1705347da","Type":"ContainerDied","Data":"4c247a8471958017d2f0b503ee41e5b0b0fe763842b1401a5151ac5e06a713b2"} Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.022112 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kpxw8" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.025689 4787 scope.go:117] "RemoveContainer" containerID="0a8e6eadd5be38a8a3b50e8253a2e1121f26f6df8bba459623929c1b43ae1b60" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.028168 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" event={"ID":"34c814cb-c6f7-48b1-8153-e532e5f71bc1","Type":"ContainerStarted","Data":"425b9eaea1d0e38e24531d006fd01d9bc85d81e4d7bca124318db382c5c0bffa"} Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.047799 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4msqr"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.056300 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4msqr"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.074400 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nnvxf"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.078060 4787 scope.go:117] "RemoveContainer" containerID="631311276c26f07db13412dc025987be0f53559edc593cf5dc9462979f34b296" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.081929 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nnvxf"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.085054 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z49wh"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.088747 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z49wh"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.102013 4787 scope.go:117] "RemoveContainer" containerID="8f170bfb021aafed08c8c5bddea7912d0f4ad75b828942b6671a9d25c31e6420" Feb 19 
19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.122869 4787 scope.go:117] "RemoveContainer" containerID="2a48cc61b0220e739063332495bbfe3cfd93b97bbc6810eec202d0cd92e496da" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.123075 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlcfq"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.134946 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlcfq"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.138569 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kpxw8"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.144382 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kpxw8"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.144714 4787 scope.go:117] "RemoveContainer" containerID="c0fdefe4d5ca3f61d437074a597218a71366533fccd3da1ebfd4e6959ec1cde5" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.159656 4787 scope.go:117] "RemoveContainer" containerID="156a9e4aba0cb1ef65dee8cb9213059c714014631e47fd3656dccd812e2ab802" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.173661 4787 scope.go:117] "RemoveContainer" containerID="d86b64d4498670113ea2df07f298abc714afc5da98125221efe85b1c58ab991b" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.197888 4787 scope.go:117] "RemoveContainer" containerID="98c18fb35f039b8c050fbe02c26c83b6513e20936caaeae995547ef660c63bf9" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.218906 4787 scope.go:117] "RemoveContainer" containerID="3e5bbf75ec506f0f45ff610e4a30398069cf420d0f30fb909211762cd07273ad" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.234469 4787 scope.go:117] "RemoveContainer" containerID="fae27ab3b6947994971c323ffbd3a7ec803bfbe78cd5e355e3edb1a9022bc15d" Feb 19 19:24:59 crc 
kubenswrapper[4787]: I0219 19:24:59.251150 4787 scope.go:117] "RemoveContainer" containerID="6b6a3c57bcf423aa4c6d59acc09d391a949a54b1b6020f2193c0692f7056b6d8" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.269407 4787 scope.go:117] "RemoveContainer" containerID="d8c6ea0e23dd19cb03ac68fb3e046c1b5b19dc28ca13f2c1f3d984647de38689" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918360 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6mcds"] Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918700 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918717 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918730 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918739 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918748 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918756 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918770 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918777 4787 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918789 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918796 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918806 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918813 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918824 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918831 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918845 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918852 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918865 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918872 4787 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="extract-content" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918882 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918889 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918904 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918915 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="extract-utilities" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918922 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918930 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918940 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918948 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" Feb 19 19:24:59 crc kubenswrapper[4787]: E0219 19:24:59.918959 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.918967 4787 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.919084 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.919098 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.919108 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb979100-3b5a-45af-8985-e80f54babd63" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.919123 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.919131 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" containerName="registry-server" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.919145 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" containerName="marketplace-operator" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.920161 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.922972 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.935542 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mcds"] Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.986847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634b3e3d-f43d-4d5c-996c-02c5277282ef-catalog-content\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.987323 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65szh\" (UniqueName: \"kubernetes.io/projected/634b3e3d-f43d-4d5c-996c-02c5277282ef-kube-api-access-65szh\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:24:59 crc kubenswrapper[4787]: I0219 19:24:59.987467 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634b3e3d-f43d-4d5c-996c-02c5277282ef-utilities\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.036112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" 
event={"ID":"34c814cb-c6f7-48b1-8153-e532e5f71bc1","Type":"ContainerStarted","Data":"bce01626ad2717df6ad559e595999abb980eb239f1f6ca3eb205ba79a30673fa"} Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.036425 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.044763 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.072459 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" podStartSLOduration=3.072436193 podStartE2EDuration="3.072436193s" podCreationTimestamp="2026-02-19 19:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:25:00.068835686 +0000 UTC m=+367.859501628" watchObservedRunningTime="2026-02-19 19:25:00.072436193 +0000 UTC m=+367.863102135" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.089404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634b3e3d-f43d-4d5c-996c-02c5277282ef-utilities\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.089637 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634b3e3d-f43d-4d5c-996c-02c5277282ef-catalog-content\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.089698 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65szh\" (UniqueName: \"kubernetes.io/projected/634b3e3d-f43d-4d5c-996c-02c5277282ef-kube-api-access-65szh\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.091389 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634b3e3d-f43d-4d5c-996c-02c5277282ef-utilities\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.091896 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634b3e3d-f43d-4d5c-996c-02c5277282ef-catalog-content\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.120639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65szh\" (UniqueName: \"kubernetes.io/projected/634b3e3d-f43d-4d5c-996c-02c5277282ef-kube-api-access-65szh\") pod \"certified-operators-6mcds\" (UID: \"634b3e3d-f43d-4d5c-996c-02c5277282ef\") " pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.124539 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2gfvj"] Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.125968 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.131819 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.133995 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gfvj"] Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.191466 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/426998bc-15ae-476b-93e7-04f7591afce3-catalog-content\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.191526 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/426998bc-15ae-476b-93e7-04f7591afce3-utilities\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.191569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q488b\" (UniqueName: \"kubernetes.io/projected/426998bc-15ae-476b-93e7-04f7591afce3-kube-api-access-q488b\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.255588 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mcds" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.293240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q488b\" (UniqueName: \"kubernetes.io/projected/426998bc-15ae-476b-93e7-04f7591afce3-kube-api-access-q488b\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.293383 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/426998bc-15ae-476b-93e7-04f7591afce3-catalog-content\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.293432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/426998bc-15ae-476b-93e7-04f7591afce3-utilities\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.294060 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/426998bc-15ae-476b-93e7-04f7591afce3-utilities\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.294183 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/426998bc-15ae-476b-93e7-04f7591afce3-catalog-content\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " 
pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.317517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q488b\" (UniqueName: \"kubernetes.io/projected/426998bc-15ae-476b-93e7-04f7591afce3-kube-api-access-q488b\") pod \"redhat-marketplace-2gfvj\" (UID: \"426998bc-15ae-476b-93e7-04f7591afce3\") " pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.462730 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gfvj" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.709051 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mcds"] Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.869328 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gfvj"] Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.901636 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51311b0a-7d74-4ee1-983c-3a48a521ded9" path="/var/lib/kubelet/pods/51311b0a-7d74-4ee1-983c-3a48a521ded9/volumes" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.902817 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90aaa1d4-625b-4592-88b2-aad8f37a5dd8" path="/var/lib/kubelet/pods/90aaa1d4-625b-4592-88b2-aad8f37a5dd8/volumes" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.903786 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a66bdd-41eb-4f60-9b98-d4d1705347da" path="/var/lib/kubelet/pods/b0a66bdd-41eb-4f60-9b98-d4d1705347da/volumes" Feb 19 19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.905326 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb979100-3b5a-45af-8985-e80f54babd63" path="/var/lib/kubelet/pods/bb979100-3b5a-45af-8985-e80f54babd63/volumes" Feb 19 
19:25:00 crc kubenswrapper[4787]: I0219 19:25:00.906316 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4" path="/var/lib/kubelet/pods/d1439a73-21d8-4a7f-9e3e-7a5bc58cbbf4/volumes"
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.057345 4787 generic.go:334] "Generic (PLEG): container finished" podID="634b3e3d-f43d-4d5c-996c-02c5277282ef" containerID="eb450fe111b8ceab20ed906ee2217da959a32efe146adf0ad59b124a49473f72" exitCode=0
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.057466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mcds" event={"ID":"634b3e3d-f43d-4d5c-996c-02c5277282ef","Type":"ContainerDied","Data":"eb450fe111b8ceab20ed906ee2217da959a32efe146adf0ad59b124a49473f72"}
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.057988 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mcds" event={"ID":"634b3e3d-f43d-4d5c-996c-02c5277282ef","Type":"ContainerStarted","Data":"5d18a9ba58970b0d7323b6b71492e0c01afa6413da27fa3d55c0bfdfcf37a18b"}
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.062139 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gfvj" event={"ID":"426998bc-15ae-476b-93e7-04f7591afce3","Type":"ContainerStarted","Data":"b033555ca99d1a43043470271310a0faf575d561fbb1227b3b22b66e6a7a4b7d"}
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.972702 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-79vrf"]
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.973687 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:01 crc kubenswrapper[4787]: I0219 19:25:01.993185 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-79vrf"]
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.071654 4787 generic.go:334] "Generic (PLEG): container finished" podID="426998bc-15ae-476b-93e7-04f7591afce3" containerID="b76952e912d85c2ebcbcb6f0b6d8bde9596640aa8c9df712303e5cc7be1eb305" exitCode=0
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.071713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gfvj" event={"ID":"426998bc-15ae-476b-93e7-04f7591afce3","Type":"ContainerDied","Data":"b76952e912d85c2ebcbcb6f0b6d8bde9596640aa8c9df712303e5cc7be1eb305"}
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d1e59ca-5c78-4454-a99a-71fe888c607c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-bound-sa-token\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118450 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d1e59ca-5c78-4454-a99a-71fe888c607c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118532 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-registry-tls\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d1e59ca-5c78-4454-a99a-71fe888c607c-trusted-ca\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118692 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118740 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d1e59ca-5c78-4454-a99a-71fe888c607c-registry-certificates\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.118760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52gh\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-kube-api-access-f52gh\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.147311 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221115 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d1e59ca-5c78-4454-a99a-71fe888c607c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221206 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-bound-sa-token\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221262 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d1e59ca-5c78-4454-a99a-71fe888c607c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221315 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-registry-tls\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221336 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d1e59ca-5c78-4454-a99a-71fe888c607c-trusted-ca\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d1e59ca-5c78-4454-a99a-71fe888c607c-registry-certificates\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.221411 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52gh\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-kube-api-access-f52gh\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.226385 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d1e59ca-5c78-4454-a99a-71fe888c607c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.227267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d1e59ca-5c78-4454-a99a-71fe888c607c-trusted-ca\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.229661 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d1e59ca-5c78-4454-a99a-71fe888c607c-registry-certificates\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.232752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-registry-tls\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.235203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d1e59ca-5c78-4454-a99a-71fe888c607c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.248301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-bound-sa-token\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.250874 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52gh\" (UniqueName: \"kubernetes.io/projected/8d1e59ca-5c78-4454-a99a-71fe888c607c-kube-api-access-f52gh\") pod \"image-registry-66df7c8f76-79vrf\" (UID: \"8d1e59ca-5c78-4454-a99a-71fe888c607c\") " pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.292550 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.311724 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jvnpx"]
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.313367 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.321782 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.332526 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvnpx"]
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.424438 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-utilities\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.424708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-kube-api-access-75gc5\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.424834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-catalog-content\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.515721 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pgqnv"]
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.523458 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.526105 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-utilities\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.526204 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-kube-api-access-75gc5\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.526718 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.527095 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-catalog-content\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.527252 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-utilities\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.528825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-catalog-content\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.540288 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgqnv"]
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.554001 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gc5\" (UniqueName: \"kubernetes.io/projected/1f861785-2aa2-4b3b-aca7-90a83d68bcd8-kube-api-access-75gc5\") pod \"community-operators-jvnpx\" (UID: \"1f861785-2aa2-4b3b-aca7-90a83d68bcd8\") " pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.628332 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b15ff10-c8f6-43ca-9538-e781e30d1842-catalog-content\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.628414 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw89p\" (UniqueName: \"kubernetes.io/projected/5b15ff10-c8f6-43ca-9538-e781e30d1842-kube-api-access-kw89p\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.628437 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b15ff10-c8f6-43ca-9538-e781e30d1842-utilities\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.650290 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.729829 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b15ff10-c8f6-43ca-9538-e781e30d1842-catalog-content\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.729916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw89p\" (UniqueName: \"kubernetes.io/projected/5b15ff10-c8f6-43ca-9538-e781e30d1842-kube-api-access-kw89p\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.729949 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b15ff10-c8f6-43ca-9538-e781e30d1842-utilities\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.730416 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b15ff10-c8f6-43ca-9538-e781e30d1842-catalog-content\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.730441 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b15ff10-c8f6-43ca-9538-e781e30d1842-utilities\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.760791 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw89p\" (UniqueName: \"kubernetes.io/projected/5b15ff10-c8f6-43ca-9538-e781e30d1842-kube-api-access-kw89p\") pod \"redhat-operators-pgqnv\" (UID: \"5b15ff10-c8f6-43ca-9538-e781e30d1842\") " pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.814219 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-79vrf"]
Feb 19 19:25:02 crc kubenswrapper[4787]: I0219 19:25:02.842218 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.081452 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf" event={"ID":"8d1e59ca-5c78-4454-a99a-71fe888c607c","Type":"ContainerStarted","Data":"c7ca32cbf419eb0c6c4c450297da3b3688533d03238680ab314a4b30f35a49d9"}
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.081520 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf" event={"ID":"8d1e59ca-5c78-4454-a99a-71fe888c607c","Type":"ContainerStarted","Data":"57446a85e62f9dc92aea731027659059ed147f9d83f6c32837a2db633165d13b"}
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.082588 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.084394 4787 generic.go:334] "Generic (PLEG): container finished" podID="634b3e3d-f43d-4d5c-996c-02c5277282ef" containerID="08c69bf88f7d2db7b35b20cc2fefa46671c927110428dbc496fb3ccb1e250fc5" exitCode=0
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.084454 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mcds" event={"ID":"634b3e3d-f43d-4d5c-996c-02c5277282ef","Type":"ContainerDied","Data":"08c69bf88f7d2db7b35b20cc2fefa46671c927110428dbc496fb3ccb1e250fc5"}
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.113288 4787 generic.go:334] "Generic (PLEG): container finished" podID="426998bc-15ae-476b-93e7-04f7591afce3" containerID="02bf13dffced051e9c5ff3bb401a0c2baf691f25c802872f1bf3417d91607850" exitCode=0
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.113396 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gfvj" event={"ID":"426998bc-15ae-476b-93e7-04f7591afce3","Type":"ContainerDied","Data":"02bf13dffced051e9c5ff3bb401a0c2baf691f25c802872f1bf3417d91607850"}
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.148932 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvnpx"]
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.153290 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf" podStartSLOduration=2.153262913 podStartE2EDuration="2.153262913s" podCreationTimestamp="2026-02-19 19:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:25:03.104791332 +0000 UTC m=+370.895457274" watchObservedRunningTime="2026-02-19 19:25:03.153262913 +0000 UTC m=+370.943928985"
Feb 19 19:25:03 crc kubenswrapper[4787]: I0219 19:25:03.312675 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pgqnv"]
Feb 19 19:25:03 crc kubenswrapper[4787]: W0219 19:25:03.321278 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b15ff10_c8f6_43ca_9538_e781e30d1842.slice/crio-30cc783a6f680868754d8fa0c1de941f30b1ef29d1727423f022e89c75d1d68a WatchSource:0}: Error finding container 30cc783a6f680868754d8fa0c1de941f30b1ef29d1727423f022e89c75d1d68a: Status 404 returned error can't find the container with id 30cc783a6f680868754d8fa0c1de941f30b1ef29d1727423f022e89c75d1d68a
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.120855 4787 generic.go:334] "Generic (PLEG): container finished" podID="5b15ff10-c8f6-43ca-9538-e781e30d1842" containerID="5f1d7148007e47645358b20fd7cc765e61bdb1241ee5d3ac09b81593f15cef53" exitCode=0
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.121312 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgqnv" event={"ID":"5b15ff10-c8f6-43ca-9538-e781e30d1842","Type":"ContainerDied","Data":"5f1d7148007e47645358b20fd7cc765e61bdb1241ee5d3ac09b81593f15cef53"}
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.121348 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgqnv" event={"ID":"5b15ff10-c8f6-43ca-9538-e781e30d1842","Type":"ContainerStarted","Data":"30cc783a6f680868754d8fa0c1de941f30b1ef29d1727423f022e89c75d1d68a"}
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.129290 4787 generic.go:334] "Generic (PLEG): container finished" podID="1f861785-2aa2-4b3b-aca7-90a83d68bcd8" containerID="440e533f0cc8ea99dc59e373226baaf98cf7dad40558e0b4a70e6737357361a3" exitCode=0
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.129379 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvnpx" event={"ID":"1f861785-2aa2-4b3b-aca7-90a83d68bcd8","Type":"ContainerDied","Data":"440e533f0cc8ea99dc59e373226baaf98cf7dad40558e0b4a70e6737357361a3"}
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.129423 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvnpx" event={"ID":"1f861785-2aa2-4b3b-aca7-90a83d68bcd8","Type":"ContainerStarted","Data":"7e1b4f91a44f37ae09b29bfa86c895bf9d8ec501add17beae5d7acd4743cfccc"}
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.138950 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mcds" event={"ID":"634b3e3d-f43d-4d5c-996c-02c5277282ef","Type":"ContainerStarted","Data":"252d9f09f876b73aea1a2edd7b30da30ed1767ccb20556aa0975c3ae3b8a5a98"}
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.144069 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gfvj" event={"ID":"426998bc-15ae-476b-93e7-04f7591afce3","Type":"ContainerStarted","Data":"7cd2af56ada9304a9e7061ca18466587aa341e236c253058c34175dc86203e8f"}
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.174174 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2gfvj" podStartSLOduration=2.649623607 podStartE2EDuration="4.17415321s" podCreationTimestamp="2026-02-19 19:25:00 +0000 UTC" firstStartedPulling="2026-02-19 19:25:02.073373558 +0000 UTC m=+369.864039500" lastFinishedPulling="2026-02-19 19:25:03.597903161 +0000 UTC m=+371.388569103" observedRunningTime="2026-02-19 19:25:04.171292825 +0000 UTC m=+371.961958767" watchObservedRunningTime="2026-02-19 19:25:04.17415321 +0000 UTC m=+371.964819152"
Feb 19 19:25:04 crc kubenswrapper[4787]: I0219 19:25:04.212935 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6mcds" podStartSLOduration=2.688742945 podStartE2EDuration="5.21291051s" podCreationTimestamp="2026-02-19 19:24:59 +0000 UTC" firstStartedPulling="2026-02-19 19:25:01.059582652 +0000 UTC m=+368.850248594" lastFinishedPulling="2026-02-19 19:25:03.583750217 +0000 UTC m=+371.374416159" observedRunningTime="2026-02-19 19:25:04.210861199 +0000 UTC m=+372.001527141" watchObservedRunningTime="2026-02-19 19:25:04.21291051 +0000 UTC m=+372.003576452"
Feb 19 19:25:05 crc kubenswrapper[4787]: I0219 19:25:05.150718 4787 generic.go:334] "Generic (PLEG): container finished" podID="1f861785-2aa2-4b3b-aca7-90a83d68bcd8" containerID="fa897ab95eebeb1b414bfdecdeb2e20a991dec241869f58d3b8bb641d431a7d3" exitCode=0
Feb 19 19:25:05 crc kubenswrapper[4787]: I0219 19:25:05.150795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvnpx" event={"ID":"1f861785-2aa2-4b3b-aca7-90a83d68bcd8","Type":"ContainerDied","Data":"fa897ab95eebeb1b414bfdecdeb2e20a991dec241869f58d3b8bb641d431a7d3"}
Feb 19 19:25:05 crc kubenswrapper[4787]: I0219 19:25:05.157027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgqnv" event={"ID":"5b15ff10-c8f6-43ca-9538-e781e30d1842","Type":"ContainerStarted","Data":"24fcc4b0f06d6f581d3572762c61069f8b34a38e4616bba2113b49eedbc84366"}
Feb 19 19:25:06 crc kubenswrapper[4787]: I0219 19:25:06.165981 4787 generic.go:334] "Generic (PLEG): container finished" podID="5b15ff10-c8f6-43ca-9538-e781e30d1842" containerID="24fcc4b0f06d6f581d3572762c61069f8b34a38e4616bba2113b49eedbc84366" exitCode=0
Feb 19 19:25:06 crc kubenswrapper[4787]: I0219 19:25:06.166078 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgqnv" event={"ID":"5b15ff10-c8f6-43ca-9538-e781e30d1842","Type":"ContainerDied","Data":"24fcc4b0f06d6f581d3572762c61069f8b34a38e4616bba2113b49eedbc84366"}
Feb 19 19:25:06 crc kubenswrapper[4787]: I0219 19:25:06.169698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvnpx" event={"ID":"1f861785-2aa2-4b3b-aca7-90a83d68bcd8","Type":"ContainerStarted","Data":"c84d216eeca76dd91d0c829609b225b13796ad10973c1b4432604ecb9b97d08d"}
Feb 19 19:25:06 crc kubenswrapper[4787]: I0219 19:25:06.222358 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jvnpx" podStartSLOduration=2.791591092 podStartE2EDuration="4.222336059s" podCreationTimestamp="2026-02-19 19:25:02 +0000 UTC" firstStartedPulling="2026-02-19 19:25:04.13139545 +0000 UTC m=+371.922061392" lastFinishedPulling="2026-02-19 19:25:05.562140417 +0000 UTC m=+373.352806359" observedRunningTime="2026-02-19 19:25:06.21733875 +0000 UTC m=+374.008004692" watchObservedRunningTime="2026-02-19 19:25:06.222336059 +0000 UTC m=+374.013002001"
Feb 19 19:25:07 crc kubenswrapper[4787]: I0219 19:25:07.178624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pgqnv" event={"ID":"5b15ff10-c8f6-43ca-9538-e781e30d1842","Type":"ContainerStarted","Data":"91d3402976cbdb611ab1f99791ef79228d84c4a2430bacfeefc57179aa317598"}
Feb 19 19:25:09 crc kubenswrapper[4787]: I0219 19:25:09.264239 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:25:09 crc kubenswrapper[4787]: I0219 19:25:09.264994 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.256178 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6mcds"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.257050 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6mcds"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.312726 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6mcds"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.333447 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pgqnv" podStartSLOduration=5.9139351940000005 podStartE2EDuration="8.333426637s" podCreationTimestamp="2026-02-19 19:25:02 +0000 UTC" firstStartedPulling="2026-02-19 19:25:04.124529305 +0000 UTC m=+371.915195247" lastFinishedPulling="2026-02-19 19:25:06.544020748 +0000 UTC m=+374.334686690" observedRunningTime="2026-02-19 19:25:07.204195558 +0000 UTC m=+374.994861500" watchObservedRunningTime="2026-02-19 19:25:10.333426637 +0000 UTC m=+378.124092579"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.476639 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2gfvj"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.477477 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2gfvj"
Feb 19 19:25:10 crc kubenswrapper[4787]: I0219 19:25:10.520883 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2gfvj"
Feb 19 19:25:11 crc kubenswrapper[4787]: I0219 19:25:11.249549 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2gfvj"
Feb 19 19:25:11 crc kubenswrapper[4787]: I0219 19:25:11.252140 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6mcds"
Feb 19 19:25:12 crc kubenswrapper[4787]: I0219 19:25:12.650687 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:12 crc kubenswrapper[4787]: I0219 19:25:12.651129 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:12 crc kubenswrapper[4787]: I0219 19:25:12.714986 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:12 crc kubenswrapper[4787]: I0219 19:25:12.843935 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:12 crc kubenswrapper[4787]: I0219 19:25:12.844119 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:12 crc kubenswrapper[4787]: I0219 19:25:12.890270 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:13 crc kubenswrapper[4787]: I0219 19:25:13.257215 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jvnpx"
Feb 19 19:25:13 crc kubenswrapper[4787]: I0219 19:25:13.258047 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pgqnv"
Feb 19 19:25:22 crc kubenswrapper[4787]: I0219 19:25:22.303000 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf"
Feb 19 19:25:22 crc kubenswrapper[4787]: I0219 19:25:22.377260 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-js449"]
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.366356 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"]
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.368198 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.372295 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.372295 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.372800 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.373327 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.377101 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.383739 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"]
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.460149 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.460238 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.460569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbp8\" (UniqueName: \"kubernetes.io/projected/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-kube-api-access-hpbp8\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.562493 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.562591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"
Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.562708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbp8\" (UniqueName:
\"kubernetes.io/projected/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-kube-api-access-hpbp8\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.563966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.572235 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.588255 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbp8\" (UniqueName: \"kubernetes.io/projected/83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b-kube-api-access-hpbp8\") pod \"cluster-monitoring-operator-6d5b84845-g9mkp\" (UID: \"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" Feb 19 19:25:30 crc kubenswrapper[4787]: I0219 19:25:30.686503 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" Feb 19 19:25:31 crc kubenswrapper[4787]: I0219 19:25:31.099764 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp"] Feb 19 19:25:31 crc kubenswrapper[4787]: I0219 19:25:31.333504 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" event={"ID":"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b","Type":"ContainerStarted","Data":"72c7452787be07234bdc1e89e0da5e7df2173a56fb491796d14dab00f8f8a443"} Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.351003 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" event={"ID":"83c5c946-5bf8-4e5b-b4f9-f360ee1f3e6b","Type":"ContainerStarted","Data":"d44f3b6a70e1baa16a9aa3f1eea7433ff8b6c02a38e9bc99d8d96f7b1668d26e"} Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.378340 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-g9mkp" podStartSLOduration=1.734494298 podStartE2EDuration="4.378317738s" podCreationTimestamp="2026-02-19 19:25:30 +0000 UTC" firstStartedPulling="2026-02-19 19:25:31.112301277 +0000 UTC m=+398.902967219" lastFinishedPulling="2026-02-19 19:25:33.756124717 +0000 UTC m=+401.546790659" observedRunningTime="2026-02-19 19:25:34.376136682 +0000 UTC m=+402.166802624" watchObservedRunningTime="2026-02-19 19:25:34.378317738 +0000 UTC m=+402.168983680" Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.425711 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp"] Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.426706 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.429089 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-hgd5x" Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.436394 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp"] Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.437525 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.523476 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e4e7fab7-39a4-4134-93a4-11f57e017fa0-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-qfzrp\" (UID: \"e4e7fab7-39a4-4134-93a4-11f57e017fa0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:34 crc kubenswrapper[4787]: I0219 19:25:34.624821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e4e7fab7-39a4-4134-93a4-11f57e017fa0-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-qfzrp\" (UID: \"e4e7fab7-39a4-4134-93a4-11f57e017fa0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:34 crc kubenswrapper[4787]: E0219 19:25:34.625039 4787 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 19 19:25:34 crc kubenswrapper[4787]: E0219 19:25:34.625130 4787 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e4e7fab7-39a4-4134-93a4-11f57e017fa0-tls-certificates podName:e4e7fab7-39a4-4134-93a4-11f57e017fa0 nodeName:}" failed. No retries permitted until 2026-02-19 19:25:35.125109143 +0000 UTC m=+402.915775085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/e4e7fab7-39a4-4134-93a4-11f57e017fa0-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-qfzrp" (UID: "e4e7fab7-39a4-4134-93a4-11f57e017fa0") : secret "prometheus-operator-admission-webhook-tls" not found Feb 19 19:25:35 crc kubenswrapper[4787]: I0219 19:25:35.132318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e4e7fab7-39a4-4134-93a4-11f57e017fa0-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-qfzrp\" (UID: \"e4e7fab7-39a4-4134-93a4-11f57e017fa0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:35 crc kubenswrapper[4787]: I0219 19:25:35.139928 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e4e7fab7-39a4-4134-93a4-11f57e017fa0-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-qfzrp\" (UID: \"e4e7fab7-39a4-4134-93a4-11f57e017fa0\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:35 crc kubenswrapper[4787]: I0219 19:25:35.391252 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:35 crc kubenswrapper[4787]: I0219 19:25:35.838234 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp"] Feb 19 19:25:36 crc kubenswrapper[4787]: I0219 19:25:36.364460 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" event={"ID":"e4e7fab7-39a4-4134-93a4-11f57e017fa0","Type":"ContainerStarted","Data":"9ed440309a756967b296bea120baa0c665b09027cda4b405b170292af8f3ffd0"} Feb 19 19:25:37 crc kubenswrapper[4787]: I0219 19:25:37.372367 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" event={"ID":"e4e7fab7-39a4-4134-93a4-11f57e017fa0","Type":"ContainerStarted","Data":"61f2f4117ff55b133e4567d4e28102695e4be885405c337101cc9e8e0b912e59"} Feb 19 19:25:37 crc kubenswrapper[4787]: I0219 19:25:37.373014 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:37 crc kubenswrapper[4787]: I0219 19:25:37.381711 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 19:25:37 crc kubenswrapper[4787]: I0219 19:25:37.405932 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podStartSLOduration=2.317725667 podStartE2EDuration="3.405909283s" podCreationTimestamp="2026-02-19 19:25:34 +0000 UTC" firstStartedPulling="2026-02-19 19:25:35.84800783 +0000 UTC m=+403.638673772" lastFinishedPulling="2026-02-19 19:25:36.936191446 +0000 UTC m=+404.726857388" observedRunningTime="2026-02-19 19:25:37.400742138 +0000 UTC 
m=+405.191408080" watchObservedRunningTime="2026-02-19 19:25:37.405909283 +0000 UTC m=+405.196575225" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.507778 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4b27q"] Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.510571 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.514985 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-2g8zk" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.515780 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.515828 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.516322 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.535425 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4b27q"] Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.610787 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/476c8314-21a3-4623-9f0a-698b38417d91-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.611163 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/476c8314-21a3-4623-9f0a-698b38417d91-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.611239 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/476c8314-21a3-4623-9f0a-698b38417d91-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.611325 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pl79\" (UniqueName: \"kubernetes.io/projected/476c8314-21a3-4623-9f0a-698b38417d91-kube-api-access-6pl79\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.712559 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/476c8314-21a3-4623-9f0a-698b38417d91-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.712676 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pl79\" (UniqueName: 
\"kubernetes.io/projected/476c8314-21a3-4623-9f0a-698b38417d91-kube-api-access-6pl79\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.712739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/476c8314-21a3-4623-9f0a-698b38417d91-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.712774 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/476c8314-21a3-4623-9f0a-698b38417d91-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.713952 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/476c8314-21a3-4623-9f0a-698b38417d91-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.720225 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/476c8314-21a3-4623-9f0a-698b38417d91-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc 
kubenswrapper[4787]: I0219 19:25:38.721429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/476c8314-21a3-4623-9f0a-698b38417d91-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.730068 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pl79\" (UniqueName: \"kubernetes.io/projected/476c8314-21a3-4623-9f0a-698b38417d91-kube-api-access-6pl79\") pod \"prometheus-operator-db54df47d-4b27q\" (UID: \"476c8314-21a3-4623-9f0a-698b38417d91\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:38 crc kubenswrapper[4787]: I0219 19:25:38.839519 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" Feb 19 19:25:39 crc kubenswrapper[4787]: I0219 19:25:39.263787 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:25:39 crc kubenswrapper[4787]: I0219 19:25:39.264335 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:25:39 crc kubenswrapper[4787]: I0219 19:25:39.264411 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:25:39 crc 
kubenswrapper[4787]: I0219 19:25:39.265331 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"142a5c3ff149fad1ffea5f20dee87392581ffa09a68fc5862a058508f6c30cc2"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:25:39 crc kubenswrapper[4787]: I0219 19:25:39.265453 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://142a5c3ff149fad1ffea5f20dee87392581ffa09a68fc5862a058508f6c30cc2" gracePeriod=600 Feb 19 19:25:39 crc kubenswrapper[4787]: I0219 19:25:39.286583 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4b27q"] Feb 19 19:25:39 crc kubenswrapper[4787]: I0219 19:25:39.389274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" event={"ID":"476c8314-21a3-4623-9f0a-698b38417d91","Type":"ContainerStarted","Data":"c7cc05e4a24a87ffc8eb9acc740faa8fb4977f57b87e5e6943a5d982aac16a35"} Feb 19 19:25:40 crc kubenswrapper[4787]: I0219 19:25:40.398521 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="142a5c3ff149fad1ffea5f20dee87392581ffa09a68fc5862a058508f6c30cc2" exitCode=0 Feb 19 19:25:40 crc kubenswrapper[4787]: I0219 19:25:40.398631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"142a5c3ff149fad1ffea5f20dee87392581ffa09a68fc5862a058508f6c30cc2"} Feb 19 19:25:40 crc kubenswrapper[4787]: I0219 19:25:40.399360 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"3c63beec0b5874f1d9e9f9dbb1f62ad403c495529a52460b8bf62f93c192ccf6"} Feb 19 19:25:40 crc kubenswrapper[4787]: I0219 19:25:40.399412 4787 scope.go:117] "RemoveContainer" containerID="4276346b9ca2fe966079006d219a537a812ea8e9ef2af2d1a610f70ab299c1d4" Feb 19 19:25:41 crc kubenswrapper[4787]: I0219 19:25:41.406831 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" event={"ID":"476c8314-21a3-4623-9f0a-698b38417d91","Type":"ContainerStarted","Data":"251f7fa8cdde39cdaadf9fce2fce031221baf01e783807f26cccb42648be3a21"} Feb 19 19:25:41 crc kubenswrapper[4787]: I0219 19:25:41.407278 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" event={"ID":"476c8314-21a3-4623-9f0a-698b38417d91","Type":"ContainerStarted","Data":"f27090d076e07b601aefbce4af9c65ff8bff2e4c349717221f1e32e4a64a02d7"} Feb 19 19:25:41 crc kubenswrapper[4787]: I0219 19:25:41.425316 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-4b27q" podStartSLOduration=2.064771884 podStartE2EDuration="3.42529283s" podCreationTimestamp="2026-02-19 19:25:38 +0000 UTC" firstStartedPulling="2026-02-19 19:25:39.295515832 +0000 UTC m=+407.086181794" lastFinishedPulling="2026-02-19 19:25:40.656036798 +0000 UTC m=+408.446702740" observedRunningTime="2026-02-19 19:25:41.423709122 +0000 UTC m=+409.214375064" watchObservedRunningTime="2026-02-19 19:25:41.42529283 +0000 UTC m=+409.215958772" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.878293 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv"] Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.880304 4787 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.884114 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cdjk9" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.884428 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.884562 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.903117 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv"] Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.912442 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c"] Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.914000 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.919267 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-dwl8h" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.919290 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.919428 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.919683 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.966440 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c"] Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.975761 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-rzz2z"] Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.976934 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.979371 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.979418 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.980621 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-c8rnj" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995398 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzds\" (UniqueName: \"kubernetes.io/projected/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-api-access-7lzds\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995475 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/349cf938-f4cd-4c77-90d3-17247b7c0afd-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/557c24fe-3be6-4a90-9150-6a57d496a0d3-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:42 crc kubenswrapper[4787]: 
I0219 19:25:42.995603 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/557c24fe-3be6-4a90-9150-6a57d496a0d3-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995734 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw77t\" (UniqueName: \"kubernetes.io/projected/557c24fe-3be6-4a90-9150-6a57d496a0d3-kube-api-access-kw77t\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/557c24fe-3be6-4a90-9150-6a57d496a0d3-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995788 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995852 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/349cf938-f4cd-4c77-90d3-17247b7c0afd-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:42 crc kubenswrapper[4787]: I0219 19:25:42.995879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097257 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-textfile\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097330 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphgf\" (UniqueName: 
\"kubernetes.io/projected/0de32950-d444-43a8-afa0-2b2ae27bfdd4-kube-api-access-pphgf\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097371 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/349cf938-f4cd-4c77-90d3-17247b7c0afd-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097465 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzds\" (UniqueName: \"kubernetes.io/projected/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-api-access-7lzds\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097493 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-sys\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 
19:25:43.097561 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-root\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097618 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-wtmp\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097654 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/349cf938-f4cd-4c77-90d3-17247b7c0afd-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097683 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097723 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/557c24fe-3be6-4a90-9150-6a57d496a0d3-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097748 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-tls\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097792 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/557c24fe-3be6-4a90-9150-6a57d496a0d3-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097845 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0de32950-d444-43a8-afa0-2b2ae27bfdd4-metrics-client-ca\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097873 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw77t\" 
(UniqueName: \"kubernetes.io/projected/557c24fe-3be6-4a90-9150-6a57d496a0d3-kube-api-access-kw77t\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/557c24fe-3be6-4a90-9150-6a57d496a0d3-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.097932 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.098062 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/349cf938-f4cd-4c77-90d3-17247b7c0afd-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: E0219 19:25:43.098125 4787 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 19 19:25:43 crc kubenswrapper[4787]: E0219 19:25:43.098243 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-tls 
podName:349cf938-f4cd-4c77-90d3-17247b7c0afd nodeName:}" failed. No retries permitted until 2026-02-19 19:25:43.598211314 +0000 UTC m=+411.388877256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-klr4c" (UID: "349cf938-f4cd-4c77-90d3-17247b7c0afd") : secret "kube-state-metrics-tls" not found Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.098913 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.099099 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/557c24fe-3be6-4a90-9150-6a57d496a0d3-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.100088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/349cf938-f4cd-4c77-90d3-17247b7c0afd-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.107218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/557c24fe-3be6-4a90-9150-6a57d496a0d3-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.107968 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.113270 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/557c24fe-3be6-4a90-9150-6a57d496a0d3-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.126867 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzds\" (UniqueName: \"kubernetes.io/projected/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-api-access-7lzds\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.127955 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw77t\" (UniqueName: \"kubernetes.io/projected/557c24fe-3be6-4a90-9150-6a57d496a0d3-kube-api-access-kw77t\") pod \"openshift-state-metrics-566fddb674-lmcpv\" (UID: \"557c24fe-3be6-4a90-9150-6a57d496a0d3\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.199496 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-textfile\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pphgf\" (UniqueName: \"kubernetes.io/projected/0de32950-d444-43a8-afa0-2b2ae27bfdd4-kube-api-access-pphgf\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200053 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-sys\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200079 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-root\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200099 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-wtmp\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc 
kubenswrapper[4787]: I0219 19:25:43.200124 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200145 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-tls\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0de32950-d444-43a8-afa0-2b2ae27bfdd4-metrics-client-ca\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200216 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-textfile\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200626 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-sys\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200665 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-root\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.200857 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-wtmp\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: E0219 19:25:43.200901 4787 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 19 19:25:43 crc kubenswrapper[4787]: E0219 19:25:43.200964 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-tls podName:0de32950-d444-43a8-afa0-2b2ae27bfdd4 nodeName:}" failed. No retries permitted until 2026-02-19 19:25:43.700946068 +0000 UTC m=+411.491612020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-tls") pod "node-exporter-rzz2z" (UID: "0de32950-d444-43a8-afa0-2b2ae27bfdd4") : secret "node-exporter-tls" not found Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.201000 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.201424 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0de32950-d444-43a8-afa0-2b2ae27bfdd4-metrics-client-ca\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.206201 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.218160 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pphgf\" (UniqueName: \"kubernetes.io/projected/0de32950-d444-43a8-afa0-2b2ae27bfdd4-kube-api-access-pphgf\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.608380 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.615205 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/349cf938-f4cd-4c77-90d3-17247b7c0afd-kube-state-metrics-tls\") pod 
\"kube-state-metrics-777cb5bd5d-klr4c\" (UID: \"349cf938-f4cd-4c77-90d3-17247b7c0afd\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.649851 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv"] Feb 19 19:25:43 crc kubenswrapper[4787]: W0219 19:25:43.655185 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557c24fe_3be6_4a90_9150_6a57d496a0d3.slice/crio-02525d5e2cb01457f1b20f9382efaf5756dd21ce8191304723c6cd5686c89115 WatchSource:0}: Error finding container 02525d5e2cb01457f1b20f9382efaf5756dd21ce8191304723c6cd5686c89115: Status 404 returned error can't find the container with id 02525d5e2cb01457f1b20f9382efaf5756dd21ce8191304723c6cd5686c89115 Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.709976 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-tls\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.714899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/0de32950-d444-43a8-afa0-2b2ae27bfdd4-node-exporter-tls\") pod \"node-exporter-rzz2z\" (UID: \"0de32950-d444-43a8-afa0-2b2ae27bfdd4\") " pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.866862 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.905887 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-rzz2z" Feb 19 19:25:43 crc kubenswrapper[4787]: I0219 19:25:43.992834 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.077390 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082195 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082428 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082554 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082624 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082855 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-tmzw8" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082568 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.082990 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.103033 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.111693 4787 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.120412 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.218889 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.218939 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.218984 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219006 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzlb\" (UniqueName: \"kubernetes.io/projected/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-kube-api-access-thzlb\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219056 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-config-volume\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219072 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-config-out\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219095 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219131 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-web-config\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.219192 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320430 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-alertmanager-trusted-ca-bundle\") pod 
\"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320493 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzlb\" (UniqueName: \"kubernetes.io/projected/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-kube-api-access-thzlb\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320527 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-config-volume\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320544 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-config-out\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: 
\"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320636 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320654 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320686 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.320704 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-web-config\") pod 
\"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.321069 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.321969 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.322627 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.328255 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-config-volume\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.328318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.329406 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.335292 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.336151 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-config-out\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.336514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.339314 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-tls-assets\") pod \"alertmanager-main-0\" (UID: 
\"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.341474 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-web-config\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.341840 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzlb\" (UniqueName: \"kubernetes.io/projected/6103ab03-f1e4-498a-a3a6-c7af15c77bcb-kube-api-access-thzlb\") pod \"alertmanager-main-0\" (UID: \"6103ab03-f1e4-498a-a3a6-c7af15c77bcb\") " pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.449919 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c"] Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.477647 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" event={"ID":"557c24fe-3be6-4a90-9150-6a57d496a0d3","Type":"ContainerStarted","Data":"c390016faa79dbb6704d9fa0fa2ad6a17b86c4b925def2ca25e80cd2709fb687"} Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.477702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" event={"ID":"557c24fe-3be6-4a90-9150-6a57d496a0d3","Type":"ContainerStarted","Data":"706a246ecf7733dc7a0fb5b7454ad4847b769c16c2073f7719751f712ba911d4"} Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.477713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" 
event={"ID":"557c24fe-3be6-4a90-9150-6a57d496a0d3","Type":"ContainerStarted","Data":"02525d5e2cb01457f1b20f9382efaf5756dd21ce8191304723c6cd5686c89115"} Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.478753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzz2z" event={"ID":"0de32950-d444-43a8-afa0-2b2ae27bfdd4","Type":"ContainerStarted","Data":"04b63bc7767fd52999a037b63c7d97a94f67fb23f2f153e70ee2aa220c7ddf8e"} Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.485867 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.864413 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-8467586bf9-4p78p"] Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.866964 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.871238 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.871303 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.871493 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.871554 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-22rlq" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.871683 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 19 19:25:44 crc kubenswrapper[4787]: 
I0219 19:25:44.871738 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-d70e3ae2e8p8l" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.871849 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.883785 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8467586bf9-4p78p"] Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.930675 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.930744 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-tls\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.930991 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/08529c3b-a268-4673-b175-8271ec28811d-metrics-client-ca\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.931066 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.931112 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-grpc-tls\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.931161 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.931189 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.931230 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mxf\" (UniqueName: 
\"kubernetes.io/projected/08529c3b-a268-4673-b175-8271ec28811d-kube-api-access-44mxf\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:44 crc kubenswrapper[4787]: I0219 19:25:44.948351 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.032719 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.032790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.032825 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mxf\" (UniqueName: \"kubernetes.io/projected/08529c3b-a268-4673-b175-8271ec28811d-kube-api-access-44mxf\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.032847 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.032903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-tls\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.033129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/08529c3b-a268-4673-b175-8271ec28811d-metrics-client-ca\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.033155 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.033182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-grpc-tls\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: 
I0219 19:25:45.034278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/08529c3b-a268-4673-b175-8271ec28811d-metrics-client-ca\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.039856 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.040281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-tls\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.040647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.041123 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-web\") pod 
\"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.041501 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-grpc-tls\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.042090 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/08529c3b-a268-4673-b175-8271ec28811d-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.047874 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mxf\" (UniqueName: \"kubernetes.io/projected/08529c3b-a268-4673-b175-8271ec28811d-kube-api-access-44mxf\") pod \"thanos-querier-8467586bf9-4p78p\" (UID: \"08529c3b-a268-4673-b175-8271ec28811d\") " pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: W0219 19:25:45.059015 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6103ab03_f1e4_498a_a3a6_c7af15c77bcb.slice/crio-b7a0a4e22abd001f11b13c21ff860c159c176d9fc0cd51955fd85e59846fb744 WatchSource:0}: Error finding container b7a0a4e22abd001f11b13c21ff860c159c176d9fc0cd51955fd85e59846fb744: Status 404 returned error can't find the container with id b7a0a4e22abd001f11b13c21ff860c159c176d9fc0cd51955fd85e59846fb744 Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 
19:25:45.196248 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.485523 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"b7a0a4e22abd001f11b13c21ff860c159c176d9fc0cd51955fd85e59846fb744"} Feb 19 19:25:45 crc kubenswrapper[4787]: I0219 19:25:45.486638 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" event={"ID":"349cf938-f4cd-4c77-90d3-17247b7c0afd","Type":"ContainerStarted","Data":"70ff2b78d8793bb857fb018f2cfbf6504f75f4bb9286b7312a8dfff907432782"} Feb 19 19:25:46 crc kubenswrapper[4787]: I0219 19:25:46.397385 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8467586bf9-4p78p"] Feb 19 19:25:46 crc kubenswrapper[4787]: W0219 19:25:46.426118 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08529c3b_a268_4673_b175_8271ec28811d.slice/crio-a683e1498a5b12bf6477b1714d252fc6ec2743453d7a93e9e0c1c46b10712264 WatchSource:0}: Error finding container a683e1498a5b12bf6477b1714d252fc6ec2743453d7a93e9e0c1c46b10712264: Status 404 returned error can't find the container with id a683e1498a5b12bf6477b1714d252fc6ec2743453d7a93e9e0c1c46b10712264 Feb 19 19:25:46 crc kubenswrapper[4787]: I0219 19:25:46.494223 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"a683e1498a5b12bf6477b1714d252fc6ec2743453d7a93e9e0c1c46b10712264"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.436574 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-js449" podUID="48098b79-2446-4f86-a42a-e6f12ab783d5" containerName="registry" containerID="cri-o://710a73bd79d037d03dec54a0ce7333c78e832bd8348950c079083c9ae5eb37dd" gracePeriod=30 Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.508976 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" event={"ID":"557c24fe-3be6-4a90-9150-6a57d496a0d3","Type":"ContainerStarted","Data":"f04b37a44673a5bd522ac8be5160170151790a8c82a979703291b1efced6f676"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.515162 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" event={"ID":"349cf938-f4cd-4c77-90d3-17247b7c0afd","Type":"ContainerStarted","Data":"e176a6eea7540abeec3c3065a5c7bbe86829a847184cdd4ad39092409fb8bcdd"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.515238 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" event={"ID":"349cf938-f4cd-4c77-90d3-17247b7c0afd","Type":"ContainerStarted","Data":"f67cc254e23da1a91ec583cc31a656e9b5e7d203a76829dd0e7c5e9ca95411b9"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.515249 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" event={"ID":"349cf938-f4cd-4c77-90d3-17247b7c0afd","Type":"ContainerStarted","Data":"6eaa31aa9933b36593565cb1ed160833e649ba0609b6574e0487fc856c549256"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.518581 4787 generic.go:334] "Generic (PLEG): container finished" podID="6103ab03-f1e4-498a-a3a6-c7af15c77bcb" containerID="afe784752dc79f11ccc2cc3a1c67de7140c33aa065a61ca79dc0e0e4cc4347ac" exitCode=0 Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.518647 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerDied","Data":"afe784752dc79f11ccc2cc3a1c67de7140c33aa065a61ca79dc0e0e4cc4347ac"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.520984 4787 generic.go:334] "Generic (PLEG): container finished" podID="0de32950-d444-43a8-afa0-2b2ae27bfdd4" containerID="832a0e73749a06e0fb18d66694c73faf1290d9885675c3ba21320bea1e094a37" exitCode=0 Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.521022 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzz2z" event={"ID":"0de32950-d444-43a8-afa0-2b2ae27bfdd4","Type":"ContainerDied","Data":"832a0e73749a06e0fb18d66694c73faf1290d9885675c3ba21320bea1e094a37"} Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.542465 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-lmcpv" podStartSLOduration=3.018731329 podStartE2EDuration="5.542437185s" podCreationTimestamp="2026-02-19 19:25:42 +0000 UTC" firstStartedPulling="2026-02-19 19:25:43.930718258 +0000 UTC m=+411.721384200" lastFinishedPulling="2026-02-19 19:25:46.454424104 +0000 UTC m=+414.245090056" observedRunningTime="2026-02-19 19:25:47.536675742 +0000 UTC m=+415.327341704" watchObservedRunningTime="2026-02-19 19:25:47.542437185 +0000 UTC m=+415.333103127" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.611950 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-klr4c" podStartSLOduration=3.6136412030000002 podStartE2EDuration="5.611919504s" podCreationTimestamp="2026-02-19 19:25:42 +0000 UTC" firstStartedPulling="2026-02-19 19:25:44.465583105 +0000 UTC m=+412.256249047" lastFinishedPulling="2026-02-19 19:25:46.463861406 +0000 UTC m=+414.254527348" observedRunningTime="2026-02-19 19:25:47.605657817 +0000 UTC m=+415.396323759" watchObservedRunningTime="2026-02-19 19:25:47.611919504 +0000 UTC 
m=+415.402585446" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.701634 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64cf654bdf-pwf7s"] Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.708140 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.720899 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64cf654bdf-pwf7s"] Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.794516 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-config\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.795089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-trusted-ca-bundle\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.795124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-service-ca\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.795149 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-oauth-config\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.795176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-oauth-serving-cert\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.795209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-serving-cert\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.795236 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzzf\" (UniqueName: \"kubernetes.io/projected/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-kube-api-access-mfzzf\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.919852 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-config\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.921271 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-trusted-ca-bundle\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.921329 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-service-ca\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.921406 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-oauth-config\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.921499 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-oauth-serving-cert\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.921621 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-serving-cert\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.921698 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mfzzf\" (UniqueName: \"kubernetes.io/projected/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-kube-api-access-mfzzf\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.922267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-config\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.927367 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-oauth-serving-cert\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.927865 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-trusted-ca-bundle\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.928668 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-service-ca\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.945019 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfzzf\" (UniqueName: 
\"kubernetes.io/projected/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-kube-api-access-mfzzf\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.949396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-oauth-config\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:47 crc kubenswrapper[4787]: I0219 19:25:47.952417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-serving-cert\") pod \"console-64cf654bdf-pwf7s\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.025216 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.242260 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-56f6f44749-gt422"] Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.243562 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.247468 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.247523 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.247902 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.248052 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hhwjl" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.248121 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c1sbuh97341us" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.251230 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.255780 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-56f6f44749-gt422"] Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327440 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb464be-e241-48b6-8e55-47bea187dcb4-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327540 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6mjv8\" (UniqueName: \"kubernetes.io/projected/2eb464be-e241-48b6-8e55-47bea187dcb4-kube-api-access-6mjv8\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327591 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-secret-metrics-client-certs\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2eb464be-e241-48b6-8e55-47bea187dcb4-metrics-server-audit-profiles\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-client-ca-bundle\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327719 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2eb464be-e241-48b6-8e55-47bea187dcb4-audit-log\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " 
pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.327799 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-secret-metrics-server-tls\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.429749 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mjv8\" (UniqueName: \"kubernetes.io/projected/2eb464be-e241-48b6-8e55-47bea187dcb4-kube-api-access-6mjv8\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.429836 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-secret-metrics-client-certs\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.429866 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2eb464be-e241-48b6-8e55-47bea187dcb4-metrics-server-audit-profiles\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.429891 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-client-ca-bundle\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.429936 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2eb464be-e241-48b6-8e55-47bea187dcb4-audit-log\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.429985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-secret-metrics-server-tls\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.430029 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb464be-e241-48b6-8e55-47bea187dcb4-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.431494 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2eb464be-e241-48b6-8e55-47bea187dcb4-audit-log\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.432027 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eb464be-e241-48b6-8e55-47bea187dcb4-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.438363 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-secret-metrics-server-tls\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.438580 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-secret-metrics-client-certs\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.439519 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb464be-e241-48b6-8e55-47bea187dcb4-client-ca-bundle\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.439908 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2eb464be-e241-48b6-8e55-47bea187dcb4-metrics-server-audit-profiles\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " 
pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.450536 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mjv8\" (UniqueName: \"kubernetes.io/projected/2eb464be-e241-48b6-8e55-47bea187dcb4-kube-api-access-6mjv8\") pod \"metrics-server-56f6f44749-gt422\" (UID: \"2eb464be-e241-48b6-8e55-47bea187dcb4\") " pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.529134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzz2z" event={"ID":"0de32950-d444-43a8-afa0-2b2ae27bfdd4","Type":"ContainerStarted","Data":"f0ea00c0c572b2d66626537c0db909992036fee3389c1aa49e5821be61e3bb79"} Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.529739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzz2z" event={"ID":"0de32950-d444-43a8-afa0-2b2ae27bfdd4","Type":"ContainerStarted","Data":"ef0f801ecf86e06712fdb1760b8ef86e749c2e29b45b564c511779ce07c7d8f3"} Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.532948 4787 generic.go:334] "Generic (PLEG): container finished" podID="48098b79-2446-4f86-a42a-e6f12ab783d5" containerID="710a73bd79d037d03dec54a0ce7333c78e832bd8348950c079083c9ae5eb37dd" exitCode=0 Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.533119 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-js449" event={"ID":"48098b79-2446-4f86-a42a-e6f12ab783d5","Type":"ContainerDied","Data":"710a73bd79d037d03dec54a0ce7333c78e832bd8348950c079083c9ae5eb37dd"} Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.560661 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-rzz2z" podStartSLOduration=4.105235463 podStartE2EDuration="6.560629596s" podCreationTimestamp="2026-02-19 19:25:42 +0000 UTC" 
firstStartedPulling="2026-02-19 19:25:43.927751769 +0000 UTC m=+411.718417711" lastFinishedPulling="2026-02-19 19:25:46.383145902 +0000 UTC m=+414.173811844" observedRunningTime="2026-02-19 19:25:48.553094081 +0000 UTC m=+416.343760033" watchObservedRunningTime="2026-02-19 19:25:48.560629596 +0000 UTC m=+416.351295538" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.566962 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.667820 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z"] Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.669084 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.671065 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.672533 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.676686 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z"] Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.734653 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9cc2e2fc-ef4a-429e-a313-00db077b7feb-monitoring-plugin-cert\") pod \"monitoring-plugin-ffdd67d56-c5b8z\" (UID: \"9cc2e2fc-ef4a-429e-a313-00db077b7feb\") " pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.836573 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9cc2e2fc-ef4a-429e-a313-00db077b7feb-monitoring-plugin-cert\") pod \"monitoring-plugin-ffdd67d56-c5b8z\" (UID: \"9cc2e2fc-ef4a-429e-a313-00db077b7feb\") " pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.865898 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9cc2e2fc-ef4a-429e-a313-00db077b7feb-monitoring-plugin-cert\") pod \"monitoring-plugin-ffdd67d56-c5b8z\" (UID: \"9cc2e2fc-ef4a-429e-a313-00db077b7feb\") " pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.880831 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.938305 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-trusted-ca\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939195 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48098b79-2446-4f86-a42a-e6f12ab783d5-installation-pull-secrets\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939281 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-bound-sa-token\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 
19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939365 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-tls\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939388 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-certificates\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939642 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939757 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48098b79-2446-4f86-a42a-e6f12ab783d5-ca-trust-extracted\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939855 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6xzh\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-kube-api-access-j6xzh\") pod \"48098b79-2446-4f86-a42a-e6f12ab783d5\" (UID: \"48098b79-2446-4f86-a42a-e6f12ab783d5\") " Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.939862 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.940258 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.940316 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.950904 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.951594 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48098b79-2446-4f86-a42a-e6f12ab783d5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.958399 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-kube-api-access-j6xzh" (OuterVolumeSpecName: "kube-api-access-j6xzh") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "kube-api-access-j6xzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.971417 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.977572 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.983236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48098b79-2446-4f86-a42a-e6f12ab783d5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "48098b79-2446-4f86-a42a-e6f12ab783d5" (UID: "48098b79-2446-4f86-a42a-e6f12ab783d5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:25:48 crc kubenswrapper[4787]: I0219 19:25:48.997252 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.042194 4787 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.042262 4787 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48098b79-2446-4f86-a42a-e6f12ab783d5-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.042279 4787 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48098b79-2446-4f86-a42a-e6f12ab783d5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.042303 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6xzh\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-kube-api-access-j6xzh\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.042316 4787 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48098b79-2446-4f86-a42a-e6f12ab783d5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.042331 4787 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48098b79-2446-4f86-a42a-e6f12ab783d5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:25:49 crc kubenswrapper[4787]: I0219 19:25:49.109969 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64cf654bdf-pwf7s"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.316558 4787 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 19:25:50 crc kubenswrapper[4787]: E0219 19:25:49.317656 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48098b79-2446-4f86-a42a-e6f12ab783d5" containerName="registry" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.317685 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="48098b79-2446-4f86-a42a-e6f12ab783d5" containerName="registry" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.318148 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="48098b79-2446-4f86-a42a-e6f12ab783d5" containerName="registry" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.320471 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.324865 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.325214 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.325453 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.326467 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.326660 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.326831 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.326957 
4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.328500 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-j8xj6" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.329104 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-2n8015h72gbmj" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.329224 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.329365 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.330497 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.343651 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.349788 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.410059 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-56f6f44749-gt422"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.451900 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-config\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 
19:25:49.452248 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4m76\" (UniqueName: \"kubernetes.io/projected/1ed994a0-dc89-48d6-a734-c6880120eaa5-kube-api-access-g4m76\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452595 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: 
\"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452711 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452762 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452815 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.452862 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ed994a0-dc89-48d6-a734-c6880120eaa5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.453015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456364 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456490 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456752 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456828 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-web-config\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456901 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.456950 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ed994a0-dc89-48d6-a734-c6880120eaa5-config-out\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.545874 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"02886d7fd3b5a474bf99c2f64459780fda0fc4368788c00ff7926a328548f678"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.545933 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"d00a6b9d8f4bba373abff23a72dfeac547b34d26a124887f279f353177595f21"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.550318 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-js449" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.550327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-js449" event={"ID":"48098b79-2446-4f86-a42a-e6f12ab783d5","Type":"ContainerDied","Data":"818469df8677679046e2de087d8dfb56c23cf3e10c0b5a6edd72adac08039696"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.550440 4787 scope.go:117] "RemoveContainer" containerID="710a73bd79d037d03dec54a0ce7333c78e832bd8348950c079083c9ae5eb37dd" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559146 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" event={"ID":"2eb464be-e241-48b6-8e55-47bea187dcb4","Type":"ContainerStarted","Data":"b5ec43bb88d365643d9f8a7dfade7f30bca1465ca897a434ad493bd47fd03466"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559176 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ed994a0-dc89-48d6-a734-c6880120eaa5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559306 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559441 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559484 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559549 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559595 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-web-config\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559673 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.559738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ed994a0-dc89-48d6-a734-c6880120eaa5-config-out\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560164 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-config\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560203 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4m76\" (UniqueName: \"kubernetes.io/projected/1ed994a0-dc89-48d6-a734-c6880120eaa5-kube-api-access-g4m76\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560815 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560874 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.560980 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.561061 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.561319 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.564706 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.565822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.565954 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64cf654bdf-pwf7s" event={"ID":"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd","Type":"ContainerStarted","Data":"677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.565992 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64cf654bdf-pwf7s" event={"ID":"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd","Type":"ContainerStarted","Data":"3b45c67b4e0efe8784d7e7e8b46f7b78fe46c6983f238ebc41b5b300b699c816"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.567420 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.569889 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.576799 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ed994a0-dc89-48d6-a734-c6880120eaa5-config-out\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.577112 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.577158 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-web-config\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.578390 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.578958 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-config\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.579691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.580278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.581917 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ed994a0-dc89-48d6-a734-c6880120eaa5-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.583804 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.584883 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.591379 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64cf654bdf-pwf7s" podStartSLOduration=2.591354212 podStartE2EDuration="2.591354212s" podCreationTimestamp="2026-02-19 19:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:25:49.587628851 +0000 UTC m=+417.378294793" watchObservedRunningTime="2026-02-19 19:25:49.591354212 +0000 UTC m=+417.382020154" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.595950 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4m76\" (UniqueName: \"kubernetes.io/projected/1ed994a0-dc89-48d6-a734-c6880120eaa5-kube-api-access-g4m76\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.599445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ed994a0-dc89-48d6-a734-c6880120eaa5-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: 
I0219 19:25:49.610287 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1ed994a0-dc89-48d6-a734-c6880120eaa5-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1ed994a0-dc89-48d6-a734-c6880120eaa5\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.664909 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-js449"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.669075 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:49.671830 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-js449"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:50.378347 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:50.463231 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z"] Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:50.577881 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"0d091990306343fd4c43a710b70babb62c38b13ea184f4ddff7a4b4e0f8ba044"} Feb 19 19:25:50 crc kubenswrapper[4787]: I0219 19:25:50.901957 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48098b79-2446-4f86-a42a-e6f12ab783d5" path="/var/lib/kubelet/pods/48098b79-2446-4f86-a42a-e6f12ab783d5/volumes" Feb 19 19:25:51 crc kubenswrapper[4787]: I0219 19:25:51.587934 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"97dcdb3bce3d4793bdc298691b960de4bca50ab376afc86d42bc3e0ebbe2ae9f"} Feb 19 19:25:51 crc kubenswrapper[4787]: I0219 19:25:51.589518 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" event={"ID":"9cc2e2fc-ef4a-429e-a313-00db077b7feb","Type":"ContainerStarted","Data":"9dd804855de43c118ca148922c998825a8d0bdcfce3d6e30f50a2f762ffb1666"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.601063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"7bd113fcc873ef3e75a09d594dae2fb6eeb3d03d1b6ea880deaf049f8da1d637"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.601129 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"8691b475180ade8758738a2c9a08407aa9f3799724a17563e9ef205f452ca429"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.601142 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"0e0fe889c37fd4f5c4e46a5d87c8e942b75999d1b826e012de7e4132d89d93f7"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.605129 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"cdde4d2f9dccdc9338b2960a9ddde99f015e3c6346190cbf756178e38ee42d15"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.605192 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" 
event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"4ce8c2e93ad5cd62cbfd68cb706119c5ea9dd847ea8755b45561db99ee32becd"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.605204 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" event={"ID":"08529c3b-a268-4673-b175-8271ec28811d","Type":"ContainerStarted","Data":"515a47e403d39eaf852994adb0958e076050ac5b6f1f0e72000ae97d77c8b5c8"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.605317 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.608562 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" event={"ID":"2eb464be-e241-48b6-8e55-47bea187dcb4","Type":"ContainerStarted","Data":"4fc688cbbc9b918179cab2fdd48598dad2b1ffc8c445cbabadd2662402ca3317"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.617217 4787 generic.go:334] "Generic (PLEG): container finished" podID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerID="564e9257ddf3ae2fbc66e9c426e5f6b1456717124509860fd75a600d0a880513" exitCode=0 Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.617305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerDied","Data":"564e9257ddf3ae2fbc66e9c426e5f6b1456717124509860fd75a600d0a880513"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.619510 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" event={"ID":"9cc2e2fc-ef4a-429e-a313-00db077b7feb","Type":"ContainerStarted","Data":"f5518f637b61acd4c45f6be5b3be1054b1fe3a46101b9c93524ca1b216e2b841"} Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.620150 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.630123 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.640460 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" podStartSLOduration=3.39559924 podStartE2EDuration="8.64042286s" podCreationTimestamp="2026-02-19 19:25:44 +0000 UTC" firstStartedPulling="2026-02-19 19:25:46.462547707 +0000 UTC m=+414.253213669" lastFinishedPulling="2026-02-19 19:25:51.707371347 +0000 UTC m=+419.498037289" observedRunningTime="2026-02-19 19:25:52.634769251 +0000 UTC m=+420.425435193" watchObservedRunningTime="2026-02-19 19:25:52.64042286 +0000 UTC m=+420.431088802" Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.663095 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" podStartSLOduration=2.481504322 podStartE2EDuration="4.663069808s" podCreationTimestamp="2026-02-19 19:25:48 +0000 UTC" firstStartedPulling="2026-02-19 19:25:49.444973552 +0000 UTC m=+417.235639494" lastFinishedPulling="2026-02-19 19:25:51.626539038 +0000 UTC m=+419.417204980" observedRunningTime="2026-02-19 19:25:52.657628355 +0000 UTC m=+420.448294307" watchObservedRunningTime="2026-02-19 19:25:52.663069808 +0000 UTC m=+420.453735750" Feb 19 19:25:52 crc kubenswrapper[4787]: I0219 19:25:52.696394 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" podStartSLOduration=3.042886362 podStartE2EDuration="4.696364135s" podCreationTimestamp="2026-02-19 19:25:48 +0000 UTC" firstStartedPulling="2026-02-19 19:25:50.717112562 +0000 UTC m=+418.507778504" lastFinishedPulling="2026-02-19 
19:25:52.370590335 +0000 UTC m=+420.161256277" observedRunningTime="2026-02-19 19:25:52.674437418 +0000 UTC m=+420.465103360" watchObservedRunningTime="2026-02-19 19:25:52.696364135 +0000 UTC m=+420.487030077" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.087701 4787 scope.go:117] "RemoveContainer" containerID="da99cfe67e28eba3572dcd38ec3ae66cc279c66e616db042aa91422925873838" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.113351 4787 scope.go:117] "RemoveContainer" containerID="b071cafc32e98a1a710e21d718846b4d4faabdc83cc17255e87f5d17e3617db1" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.132140 4787 scope.go:117] "RemoveContainer" containerID="7c882dce17089aac4d1375a8002df500f23c44fcbf82914950fc474a876c7752" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.151245 4787 scope.go:117] "RemoveContainer" containerID="6d9f8b2f4524d95d8e2e21d7b3aa35b7666f8d485e0d629006e64f8f7717edde" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.168082 4787 scope.go:117] "RemoveContainer" containerID="012d7f9fd5ca333c72638f915ddc94b32e62d292422ad7a986da755c98d4b7f9" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.183445 4787 scope.go:117] "RemoveContainer" containerID="8c899ab9cea625020b30396890b61e29545d0c59cea84600dc6a41ff836620b4" Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.634701 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"2c6c7f5b38ccc998df7b7b78dfeebc5b96b29796a6b24316ee570adb92c2102b"} Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.634758 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"da5c1377953c1bcd7c84d81d45edf23ec97e360113f62dd7b077755c4b01c05a"} Feb 19 19:25:53 crc kubenswrapper[4787]: I0219 19:25:53.634772 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"6103ab03-f1e4-498a-a3a6-c7af15c77bcb","Type":"ContainerStarted","Data":"fc64ab6f6e0f6bc00443370664da70fc785359ca7c563b0c65387b1e1f1d8349"} Feb 19 19:25:55 crc kubenswrapper[4787]: I0219 19:25:55.206337 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" Feb 19 19:25:55 crc kubenswrapper[4787]: I0219 19:25:55.236668 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=5.678711749 podStartE2EDuration="12.236640446s" podCreationTimestamp="2026-02-19 19:25:43 +0000 UTC" firstStartedPulling="2026-02-19 19:25:45.061375905 +0000 UTC m=+412.852041847" lastFinishedPulling="2026-02-19 19:25:51.619304602 +0000 UTC m=+419.409970544" observedRunningTime="2026-02-19 19:25:53.670730483 +0000 UTC m=+421.461396455" watchObservedRunningTime="2026-02-19 19:25:55.236640446 +0000 UTC m=+423.027306388" Feb 19 19:25:56 crc kubenswrapper[4787]: I0219 19:25:56.659654 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"96c8de298f633393601e28136f114511f2a0198eeb6e9dfa9b1d1a903ebd99a5"} Feb 19 19:25:56 crc kubenswrapper[4787]: I0219 19:25:56.660178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"31c12d1bce5cf8cf478e56bc5dbd170de0a93e7ca5a1c5fa61011f773c4a6bf4"} Feb 19 19:25:56 crc kubenswrapper[4787]: I0219 19:25:56.660203 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"24d854152d99b290d014cf5a79d9e90e729fcaa94a71908050a4d91d9d34469a"} Feb 19 19:25:56 crc kubenswrapper[4787]: I0219 19:25:56.660216 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"69e1ca07b01e8b6bbd9006b099a0b7247185e9b42dc3318e92d91a8fb82965ce"} Feb 19 19:25:56 crc kubenswrapper[4787]: I0219 19:25:56.660230 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"1bc44a1ad72732703eb52392df9809cf59d2f3283b92d6cd5f89528421380704"} Feb 19 19:25:57 crc kubenswrapper[4787]: I0219 19:25:57.671030 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1ed994a0-dc89-48d6-a734-c6880120eaa5","Type":"ContainerStarted","Data":"3a50174c3a6233820d0396077be6f2c379d968029040ad69e0908face410cdf2"} Feb 19 19:25:57 crc kubenswrapper[4787]: I0219 19:25:57.702690 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.538593376 podStartE2EDuration="8.702665366s" podCreationTimestamp="2026-02-19 19:25:49 +0000 UTC" firstStartedPulling="2026-02-19 19:25:52.619634398 +0000 UTC m=+420.410300340" lastFinishedPulling="2026-02-19 19:25:55.783706388 +0000 UTC m=+423.574372330" observedRunningTime="2026-02-19 19:25:57.701248793 +0000 UTC m=+425.491914745" watchObservedRunningTime="2026-02-19 19:25:57.702665366 +0000 UTC m=+425.493331328" Feb 19 19:25:58 crc kubenswrapper[4787]: I0219 19:25:58.026039 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:58 crc kubenswrapper[4787]: I0219 19:25:58.026094 4787 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:58 crc kubenswrapper[4787]: I0219 19:25:58.035200 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:58 crc kubenswrapper[4787]: I0219 19:25:58.682625 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:25:58 crc kubenswrapper[4787]: I0219 19:25:58.752087 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h92w2"] Feb 19 19:25:59 crc kubenswrapper[4787]: I0219 19:25:59.669937 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:26:08 crc kubenswrapper[4787]: I0219 19:26:08.567261 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:26:08 crc kubenswrapper[4787]: I0219 19:26:08.567899 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:26:23 crc kubenswrapper[4787]: I0219 19:26:23.808002 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-h92w2" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" containerID="cri-o://295b80b8860738632d1f7adbd3618b72e89e1c1047b46887930a5bc2bba49245" gracePeriod=15 Feb 19 19:26:24 crc kubenswrapper[4787]: I0219 19:26:24.259062 4787 patch_prober.go:28] interesting pod/console-f9d7485db-h92w2 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 19:26:24 crc kubenswrapper[4787]: I0219 19:26:24.259635 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/console-f9d7485db-h92w2" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 19:26:24 crc kubenswrapper[4787]: I0219 19:26:24.856783 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h92w2_70c356be-c7d4-479a-a357-4cfe97e5e9c9/console/0.log" Feb 19 19:26:24 crc kubenswrapper[4787]: I0219 19:26:24.856830 4787 generic.go:334] "Generic (PLEG): container finished" podID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerID="295b80b8860738632d1f7adbd3618b72e89e1c1047b46887930a5bc2bba49245" exitCode=2 Feb 19 19:26:24 crc kubenswrapper[4787]: I0219 19:26:24.856860 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h92w2" event={"ID":"70c356be-c7d4-479a-a357-4cfe97e5e9c9","Type":"ContainerDied","Data":"295b80b8860738632d1f7adbd3618b72e89e1c1047b46887930a5bc2bba49245"} Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.666205 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h92w2_70c356be-c7d4-479a-a357-4cfe97e5e9c9/console/0.log" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.666646 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.777804 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-oauth-config\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.777869 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-serving-cert\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.777907 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-trusted-ca-bundle\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.777943 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-oauth-serving-cert\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.777970 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-service-ca\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.778015 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-config\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.778106 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27kkh\" (UniqueName: \"kubernetes.io/projected/70c356be-c7d4-479a-a357-4cfe97e5e9c9-kube-api-access-27kkh\") pod \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\" (UID: \"70c356be-c7d4-479a-a357-4cfe97e5e9c9\") " Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779404 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779417 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-service-ca" (OuterVolumeSpecName: "service-ca") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-config" (OuterVolumeSpecName: "console-config") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779839 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779863 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779875 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.779973 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.786336 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c356be-c7d4-479a-a357-4cfe97e5e9c9-kube-api-access-27kkh" (OuterVolumeSpecName: "kube-api-access-27kkh") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "kube-api-access-27kkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.787782 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.791051 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "70c356be-c7d4-479a-a357-4cfe97e5e9c9" (UID: "70c356be-c7d4-479a-a357-4cfe97e5e9c9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.863299 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-h92w2_70c356be-c7d4-479a-a357-4cfe97e5e9c9/console/0.log" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.863369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h92w2" event={"ID":"70c356be-c7d4-479a-a357-4cfe97e5e9c9","Type":"ContainerDied","Data":"800e3ff5699332140f4d749cc3b3b55278be4c1679ebc1f2a672a189e027e00a"} Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.863422 4787 scope.go:117] "RemoveContainer" containerID="295b80b8860738632d1f7adbd3618b72e89e1c1047b46887930a5bc2bba49245" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.863550 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-h92w2" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.882648 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.883429 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/70c356be-c7d4-479a-a357-4cfe97e5e9c9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.883636 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70c356be-c7d4-479a-a357-4cfe97e5e9c9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.883656 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27kkh\" (UniqueName: \"kubernetes.io/projected/70c356be-c7d4-479a-a357-4cfe97e5e9c9-kube-api-access-27kkh\") on node \"crc\" DevicePath \"\"" Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.899406 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-h92w2"] Feb 19 19:26:25 crc kubenswrapper[4787]: I0219 19:26:25.904261 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-h92w2"] Feb 19 19:26:26 crc kubenswrapper[4787]: I0219 19:26:26.900122 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" path="/var/lib/kubelet/pods/70c356be-c7d4-479a-a357-4cfe97e5e9c9/volumes" Feb 19 19:26:28 crc kubenswrapper[4787]: I0219 19:26:28.573167 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:26:28 crc 
kubenswrapper[4787]: I0219 19:26:28.576697 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" Feb 19 19:26:49 crc kubenswrapper[4787]: I0219 19:26:49.670064 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:26:49 crc kubenswrapper[4787]: I0219 19:26:49.703072 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:26:50 crc kubenswrapper[4787]: I0219 19:26:50.056120 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.760229 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76cdd75fc5-lmqvz"] Feb 19 19:27:34 crc kubenswrapper[4787]: E0219 19:27:34.761376 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.761418 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.761624 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c356be-c7d4-479a-a357-4cfe97e5e9c9" containerName="console" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.762188 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833591 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-oauth-config\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833670 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-trusted-ca-bundle\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833697 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-service-ca\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833722 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvdx\" (UniqueName: \"kubernetes.io/projected/41596dfd-7706-4a0d-a152-026620b1d1b4-kube-api-access-dgvdx\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-console-config\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833799 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-oauth-serving-cert\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833855 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-serving-cert\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.833931 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76cdd75fc5-lmqvz"] Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935516 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-serving-cert\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935620 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-trusted-ca-bundle\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " 
pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935644 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-oauth-config\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935662 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-service-ca\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935685 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgvdx\" (UniqueName: \"kubernetes.io/projected/41596dfd-7706-4a0d-a152-026620b1d1b4-kube-api-access-dgvdx\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935714 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-console-config\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.935749 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-oauth-serving-cert\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " 
pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.937727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-service-ca\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.937728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-trusted-ca-bundle\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.937824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-console-config\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.938301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-oauth-serving-cert\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.956637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-oauth-config\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc 
kubenswrapper[4787]: I0219 19:27:34.956752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-serving-cert\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:34 crc kubenswrapper[4787]: I0219 19:27:34.962057 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgvdx\" (UniqueName: \"kubernetes.io/projected/41596dfd-7706-4a0d-a152-026620b1d1b4-kube-api-access-dgvdx\") pod \"console-76cdd75fc5-lmqvz\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:35 crc kubenswrapper[4787]: I0219 19:27:35.083791 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:35 crc kubenswrapper[4787]: I0219 19:27:35.483447 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76cdd75fc5-lmqvz"] Feb 19 19:27:36 crc kubenswrapper[4787]: I0219 19:27:36.306513 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cdd75fc5-lmqvz" event={"ID":"41596dfd-7706-4a0d-a152-026620b1d1b4","Type":"ContainerStarted","Data":"7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757"} Feb 19 19:27:36 crc kubenswrapper[4787]: I0219 19:27:36.306864 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cdd75fc5-lmqvz" event={"ID":"41596dfd-7706-4a0d-a152-026620b1d1b4","Type":"ContainerStarted","Data":"ee05c2c1d070d51dfbee3719d11c1bdf301540c88db0b9b076379eff8f993935"} Feb 19 19:27:36 crc kubenswrapper[4787]: I0219 19:27:36.342432 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76cdd75fc5-lmqvz" podStartSLOduration=2.342411199 
podStartE2EDuration="2.342411199s" podCreationTimestamp="2026-02-19 19:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:27:36.340881056 +0000 UTC m=+524.131547008" watchObservedRunningTime="2026-02-19 19:27:36.342411199 +0000 UTC m=+524.133077141" Feb 19 19:27:39 crc kubenswrapper[4787]: I0219 19:27:39.263101 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:27:39 crc kubenswrapper[4787]: I0219 19:27:39.263165 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:27:45 crc kubenswrapper[4787]: I0219 19:27:45.084312 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:45 crc kubenswrapper[4787]: I0219 19:27:45.084910 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:45 crc kubenswrapper[4787]: I0219 19:27:45.089798 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:45 crc kubenswrapper[4787]: I0219 19:27:45.364501 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:27:45 crc kubenswrapper[4787]: I0219 19:27:45.435282 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-64cf654bdf-pwf7s"] Feb 19 19:28:09 crc kubenswrapper[4787]: I0219 19:28:09.263138 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:28:09 crc kubenswrapper[4787]: I0219 19:28:09.263843 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.486335 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64cf654bdf-pwf7s" podUID="924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" containerName="console" containerID="cri-o://677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa" gracePeriod=15 Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.840036 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64cf654bdf-pwf7s_924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd/console/0.log" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.840361 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939433 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-oauth-serving-cert\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939497 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzzf\" (UniqueName: \"kubernetes.io/projected/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-kube-api-access-mfzzf\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939537 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-trusted-ca-bundle\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939598 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-oauth-config\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939664 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-serving-cert\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939696 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-service-ca\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.939885 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-config\") pod \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\" (UID: \"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd\") " Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.940504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.941534 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.941683 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-config" (OuterVolumeSpecName: "console-config") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.941743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-service-ca" (OuterVolumeSpecName: "service-ca") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.946239 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-kube-api-access-mfzzf" (OuterVolumeSpecName: "kube-api-access-mfzzf") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "kube-api-access-mfzzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.946745 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:28:10 crc kubenswrapper[4787]: I0219 19:28:10.948979 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" (UID: "924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042766 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042812 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfzzf\" (UniqueName: \"kubernetes.io/projected/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-kube-api-access-mfzzf\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042825 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042834 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042843 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042852 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.042859 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:28:11 crc 
kubenswrapper[4787]: I0219 19:28:11.512482 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64cf654bdf-pwf7s_924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd/console/0.log" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.512534 4787 generic.go:334] "Generic (PLEG): container finished" podID="924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" containerID="677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa" exitCode=2 Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.512568 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64cf654bdf-pwf7s" event={"ID":"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd","Type":"ContainerDied","Data":"677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa"} Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.512596 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64cf654bdf-pwf7s" event={"ID":"924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd","Type":"ContainerDied","Data":"3b45c67b4e0efe8784d7e7e8b46f7b78fe46c6983f238ebc41b5b300b699c816"} Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.512626 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64cf654bdf-pwf7s" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.512641 4787 scope.go:117] "RemoveContainer" containerID="677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.529226 4787 scope.go:117] "RemoveContainer" containerID="677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa" Feb 19 19:28:11 crc kubenswrapper[4787]: E0219 19:28:11.529753 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa\": container with ID starting with 677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa not found: ID does not exist" containerID="677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.529789 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa"} err="failed to get container status \"677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa\": rpc error: code = NotFound desc = could not find container \"677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa\": container with ID starting with 677977f9c310cd47d662bf4bba1aeb2829dbd49ed782940e51c30b39cf1036aa not found: ID does not exist" Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.537456 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64cf654bdf-pwf7s"] Feb 19 19:28:11 crc kubenswrapper[4787]: I0219 19:28:11.542033 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64cf654bdf-pwf7s"] Feb 19 19:28:12 crc kubenswrapper[4787]: I0219 19:28:12.908038 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" path="/var/lib/kubelet/pods/924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd/volumes" Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.262925 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.263492 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.263535 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.264138 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c63beec0b5874f1d9e9f9dbb1f62ad403c495529a52460b8bf62f93c192ccf6"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.264201 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://3c63beec0b5874f1d9e9f9dbb1f62ad403c495529a52460b8bf62f93c192ccf6" gracePeriod=600 Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.679795 4787 generic.go:334] "Generic 
(PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="3c63beec0b5874f1d9e9f9dbb1f62ad403c495529a52460b8bf62f93c192ccf6" exitCode=0 Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.679864 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"3c63beec0b5874f1d9e9f9dbb1f62ad403c495529a52460b8bf62f93c192ccf6"} Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.680201 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"eddfeaf72585fc8755796a91f30a98dc405a75dee35e13b5751f5a4b560c364c"} Feb 19 19:28:39 crc kubenswrapper[4787]: I0219 19:28:39.680226 4787 scope.go:117] "RemoveContainer" containerID="142a5c3ff149fad1ffea5f20dee87392581ffa09a68fc5862a058508f6c30cc2" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.215330 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh"] Feb 19 19:29:51 crc kubenswrapper[4787]: E0219 19:29:51.216099 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" containerName="console" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.216112 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" containerName="console" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.216219 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="924b22ea-f49c-4fb0-89d2-2c9e8b7ab3dd" containerName="console" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.217051 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.219225 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.227547 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh"] Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.321108 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.321195 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.321234 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk6zn\" (UniqueName: \"kubernetes.io/projected/b8b41187-185a-475c-84c9-5d64f4343eac-kube-api-access-zk6zn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: 
I0219 19:29:51.423021 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.423134 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.423175 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk6zn\" (UniqueName: \"kubernetes.io/projected/b8b41187-185a-475c-84c9-5d64f4343eac-kube-api-access-zk6zn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.423688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.423731 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.444068 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk6zn\" (UniqueName: \"kubernetes.io/projected/b8b41187-185a-475c-84c9-5d64f4343eac-kube-api-access-zk6zn\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.537138 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:51 crc kubenswrapper[4787]: I0219 19:29:51.994476 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh"] Feb 19 19:29:52 crc kubenswrapper[4787]: W0219 19:29:52.000813 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8b41187_185a_475c_84c9_5d64f4343eac.slice/crio-b8f73715a697245d88e518fad7efd911ba6091db5ec13d5ea6b038902cb32772 WatchSource:0}: Error finding container b8f73715a697245d88e518fad7efd911ba6091db5ec13d5ea6b038902cb32772: Status 404 returned error can't find the container with id b8f73715a697245d88e518fad7efd911ba6091db5ec13d5ea6b038902cb32772 Feb 19 19:29:52 crc kubenswrapper[4787]: I0219 19:29:52.146222 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" 
event={"ID":"b8b41187-185a-475c-84c9-5d64f4343eac","Type":"ContainerStarted","Data":"b8f73715a697245d88e518fad7efd911ba6091db5ec13d5ea6b038902cb32772"} Feb 19 19:29:53 crc kubenswrapper[4787]: I0219 19:29:53.154401 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8b41187-185a-475c-84c9-5d64f4343eac" containerID="7447862b9dafeb2e127ea1be20b6fbbe21e2f773cdc47dee6eb8cf860b2b8553" exitCode=0 Feb 19 19:29:53 crc kubenswrapper[4787]: I0219 19:29:53.154462 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" event={"ID":"b8b41187-185a-475c-84c9-5d64f4343eac","Type":"ContainerDied","Data":"7447862b9dafeb2e127ea1be20b6fbbe21e2f773cdc47dee6eb8cf860b2b8553"} Feb 19 19:29:53 crc kubenswrapper[4787]: I0219 19:29:53.157469 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:29:55 crc kubenswrapper[4787]: I0219 19:29:55.168754 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8b41187-185a-475c-84c9-5d64f4343eac" containerID="7e91f8e33525a8abb0168e9539d3e06e4c1381723ccf21b4ce1fe781db7e7ac9" exitCode=0 Feb 19 19:29:55 crc kubenswrapper[4787]: I0219 19:29:55.168830 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" event={"ID":"b8b41187-185a-475c-84c9-5d64f4343eac","Type":"ContainerDied","Data":"7e91f8e33525a8abb0168e9539d3e06e4c1381723ccf21b4ce1fe781db7e7ac9"} Feb 19 19:29:56 crc kubenswrapper[4787]: I0219 19:29:56.180599 4787 generic.go:334] "Generic (PLEG): container finished" podID="b8b41187-185a-475c-84c9-5d64f4343eac" containerID="dcc4ac755a44a71d2c3e960bf399ab61d499fe4370c5f5893f6e9b93541e6357" exitCode=0 Feb 19 19:29:56 crc kubenswrapper[4787]: I0219 19:29:56.180784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" event={"ID":"b8b41187-185a-475c-84c9-5d64f4343eac","Type":"ContainerDied","Data":"dcc4ac755a44a71d2c3e960bf399ab61d499fe4370c5f5893f6e9b93541e6357"} Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.515639 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.622510 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk6zn\" (UniqueName: \"kubernetes.io/projected/b8b41187-185a-475c-84c9-5d64f4343eac-kube-api-access-zk6zn\") pod \"b8b41187-185a-475c-84c9-5d64f4343eac\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.622634 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-util\") pod \"b8b41187-185a-475c-84c9-5d64f4343eac\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.622769 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-bundle\") pod \"b8b41187-185a-475c-84c9-5d64f4343eac\" (UID: \"b8b41187-185a-475c-84c9-5d64f4343eac\") " Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.625639 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-bundle" (OuterVolumeSpecName: "bundle") pod "b8b41187-185a-475c-84c9-5d64f4343eac" (UID: "b8b41187-185a-475c-84c9-5d64f4343eac"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.629801 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b41187-185a-475c-84c9-5d64f4343eac-kube-api-access-zk6zn" (OuterVolumeSpecName: "kube-api-access-zk6zn") pod "b8b41187-185a-475c-84c9-5d64f4343eac" (UID: "b8b41187-185a-475c-84c9-5d64f4343eac"). InnerVolumeSpecName "kube-api-access-zk6zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.645464 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-util" (OuterVolumeSpecName: "util") pod "b8b41187-185a-475c-84c9-5d64f4343eac" (UID: "b8b41187-185a-475c-84c9-5d64f4343eac"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.724147 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-util\") on node \"crc\" DevicePath \"\"" Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.724216 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b8b41187-185a-475c-84c9-5d64f4343eac-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:29:57 crc kubenswrapper[4787]: I0219 19:29:57.724229 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk6zn\" (UniqueName: \"kubernetes.io/projected/b8b41187-185a-475c-84c9-5d64f4343eac-kube-api-access-zk6zn\") on node \"crc\" DevicePath \"\"" Feb 19 19:29:58 crc kubenswrapper[4787]: I0219 19:29:58.200973 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" 
event={"ID":"b8b41187-185a-475c-84c9-5d64f4343eac","Type":"ContainerDied","Data":"b8f73715a697245d88e518fad7efd911ba6091db5ec13d5ea6b038902cb32772"} Feb 19 19:29:58 crc kubenswrapper[4787]: I0219 19:29:58.201014 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f73715a697245d88e518fad7efd911ba6091db5ec13d5ea6b038902cb32772" Feb 19 19:29:58 crc kubenswrapper[4787]: I0219 19:29:58.201332 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.147824 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd"] Feb 19 19:30:00 crc kubenswrapper[4787]: E0219 19:30:00.148399 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="pull" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.148417 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="pull" Feb 19 19:30:00 crc kubenswrapper[4787]: E0219 19:30:00.148436 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="util" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.148445 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="util" Feb 19 19:30:00 crc kubenswrapper[4787]: E0219 19:30:00.148459 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="extract" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.148469 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="extract" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.148590 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b41187-185a-475c-84c9-5d64f4343eac" containerName="extract" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.149025 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.152144 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.152395 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.162673 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd"] Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.257884 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-config-volume\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.257956 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9ck\" (UniqueName: \"kubernetes.io/projected/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-kube-api-access-4c9ck\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.257998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-secret-volume\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.359016 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-secret-volume\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.359140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-config-volume\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.359181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c9ck\" (UniqueName: \"kubernetes.io/projected/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-kube-api-access-4c9ck\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.360443 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-config-volume\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: 
I0219 19:30:00.364105 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-secret-volume\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.373937 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c9ck\" (UniqueName: \"kubernetes.io/projected/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-kube-api-access-4c9ck\") pod \"collect-profiles-29525490-hxpjd\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.473368 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:00 crc kubenswrapper[4787]: I0219 19:30:00.682086 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd"] Feb 19 19:30:01 crc kubenswrapper[4787]: I0219 19:30:01.220203 4787 generic.go:334] "Generic (PLEG): container finished" podID="a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" containerID="462a87db35f071b2f556abca0715b4980d3467b9c8471be41769c92482f7eb91" exitCode=0 Feb 19 19:30:01 crc kubenswrapper[4787]: I0219 19:30:01.220295 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" event={"ID":"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95","Type":"ContainerDied","Data":"462a87db35f071b2f556abca0715b4980d3467b9c8471be41769c92482f7eb91"} Feb 19 19:30:01 crc kubenswrapper[4787]: I0219 19:30:01.220490 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" 
event={"ID":"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95","Type":"ContainerStarted","Data":"ec4716c4e7de45ca23f6b63f8924ecaac5d92f00444cfb196bd1a4a868de1707"} Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.408248 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5xjgd"] Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409020 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-controller" containerID="cri-o://791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409137 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409153 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="northd" containerID="cri-o://94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409175 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-node" containerID="cri-o://29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409262 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" 
podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-acl-logging" containerID="cri-o://aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409281 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="sbdb" containerID="cri-o://3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.409291 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="nbdb" containerID="cri-o://4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.449946 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" containerID="cri-o://52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520" gracePeriod=30 Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.605518 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.696888 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-secret-volume\") pod \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.696951 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c9ck\" (UniqueName: \"kubernetes.io/projected/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-kube-api-access-4c9ck\") pod \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.697043 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-config-volume\") pod \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\" (UID: \"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95\") " Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.698668 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-config-volume" (OuterVolumeSpecName: "config-volume") pod "a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" (UID: "a62f9ef9-50b4-40d7-a4d6-41046d4fdf95"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.702045 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" (UID: "a62f9ef9-50b4-40d7-a4d6-41046d4fdf95"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.702250 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-kube-api-access-4c9ck" (OuterVolumeSpecName: "kube-api-access-4c9ck") pod "a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" (UID: "a62f9ef9-50b4-40d7-a4d6-41046d4fdf95"). InnerVolumeSpecName "kube-api-access-4c9ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.798942 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.799003 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c9ck\" (UniqueName: \"kubernetes.io/projected/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-kube-api-access-4c9ck\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:02 crc kubenswrapper[4787]: I0219 19:30:02.799014 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.231912 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/2.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.232236 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/1.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.232278 4787 generic.go:334] "Generic (PLEG): container finished" podID="f0706129-aa73-40ed-899f-02882ed5a4cc" containerID="ab6f912b26d7da8c204f3006c121135c14a78395a3837de5a8c6b3cba6c43a85" 
exitCode=2 Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.232334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerDied","Data":"ab6f912b26d7da8c204f3006c121135c14a78395a3837de5a8c6b3cba6c43a85"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.232400 4787 scope.go:117] "RemoveContainer" containerID="f1a2a8391d8722e1286e25c88cfe51b58383961ac6960f6b8ea68a657f322fc1" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.233473 4787 scope.go:117] "RemoveContainer" containerID="ab6f912b26d7da8c204f3006c121135c14a78395a3837de5a8c6b3cba6c43a85" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.234254 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" event={"ID":"a62f9ef9-50b4-40d7-a4d6-41046d4fdf95","Type":"ContainerDied","Data":"ec4716c4e7de45ca23f6b63f8924ecaac5d92f00444cfb196bd1a4a868de1707"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.234313 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec4716c4e7de45ca23f6b63f8924ecaac5d92f00444cfb196bd1a4a868de1707" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.234281 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.234804 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qxzkq_openshift-multus(f0706129-aa73-40ed-899f-02882ed5a4cc)\"" pod="openshift-multus/multus-qxzkq" podUID="f0706129-aa73-40ed-899f-02882ed5a4cc" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.236364 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovnkube-controller/3.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.238866 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovn-acl-logging/0.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.239415 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovn-controller/0.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245127 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520" exitCode=0 Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245159 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58" exitCode=0 Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245169 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d" exitCode=0 Feb 19 19:30:03 crc 
kubenswrapper[4787]: I0219 19:30:03.245180 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1" exitCode=0 Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245188 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398" exitCode=143 Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245196 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1" exitCode=143 Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245218 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245253 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245267 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245279 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" 
event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245290 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.245300 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1"} Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.270125 4787 scope.go:117] "RemoveContainer" containerID="7b170abcc601e2b8f799dd7b3688b4d66b51ac62c96f6a3c923029fdccb32e98" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.663887 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovn-acl-logging/0.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.664887 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovn-controller/0.log" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.665463 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.812553 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-ovn\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.812877 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-systemd-units\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813020 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-slash\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813091 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-kubelet\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813152 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-netd\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813248 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-config\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813363 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-script-lib\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813444 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813505 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-node-log\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813561 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-systemd\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813647 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-var-lib-openvswitch\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " 
Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813736 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-ovn-kubernetes\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813797 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mht\" (UniqueName: \"kubernetes.io/projected/4989ff60-0c48-4f78-bcf6-2d394ee929fd-kube-api-access-r4mht\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813858 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-env-overrides\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813947 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovn-node-metrics-cert\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814016 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-log-socket\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814085 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-openvswitch\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814142 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-bin\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814197 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-netns\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814274 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-etc-openvswitch\") pod \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\" (UID: \"4989ff60-0c48-4f78-bcf6-2d394ee929fd\") " Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.812718 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.812977 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813274 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-slash" (OuterVolumeSpecName: "host-slash") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813307 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813325 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813717 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813742 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813945 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.813983 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-node-log" (OuterVolumeSpecName: "node-log") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814456 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814653 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814670 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-log-socket" (OuterVolumeSpecName: "log-socket") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814683 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814696 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.814708 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.815366 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.842319 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4989ff60-0c48-4f78-bcf6-2d394ee929fd-kube-api-access-r4mht" (OuterVolumeSpecName: "kube-api-access-r4mht") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "kube-api-access-r4mht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.845086 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.850961 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lcfqc"] Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851287 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kubecfg-setup" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851312 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kubecfg-setup" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851323 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" containerName="collect-profiles" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851331 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" containerName="collect-profiles" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851342 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851349 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851363 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="nbdb" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851369 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="nbdb" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851377 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851385 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851399 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-acl-logging" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851407 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-acl-logging" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851417 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851424 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851435 4787 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851442 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851451 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851458 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851471 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-node" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851478 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-node" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851489 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="northd" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851496 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="northd" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851507 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="sbdb" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851514 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="sbdb" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851662 4787 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="sbdb" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851673 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" containerName="collect-profiles" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851684 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851693 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-acl-logging" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851700 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851708 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="nbdb" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851720 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851727 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovn-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851739 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851748 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="northd" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851761 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851774 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="kube-rbac-proxy-node" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851885 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851894 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: E0219 19:30:03.851905 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.851912 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.852054 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerName="ovnkube-controller" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.854389 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.855424 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4989ff60-0c48-4f78-bcf6-2d394ee929fd" (UID: "4989ff60-0c48-4f78-bcf6-2d394ee929fd"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916022 4787 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916064 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mht\" (UniqueName: \"kubernetes.io/projected/4989ff60-0c48-4f78-bcf6-2d394ee929fd-kube-api-access-r4mht\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916076 4787 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916084 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916093 4787 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916104 4787 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916112 4787 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc 
kubenswrapper[4787]: I0219 19:30:03.916120 4787 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916128 4787 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916136 4787 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916144 4787 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916152 4787 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916159 4787 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916168 4787 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916175 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916183 4787 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4989ff60-0c48-4f78-bcf6-2d394ee929fd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916190 4787 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916198 4787 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916207 4787 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:03 crc kubenswrapper[4787]: I0219 19:30:03.916215 4787 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4989ff60-0c48-4f78-bcf6-2d394ee929fd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017434 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-var-lib-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017506 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovn-node-metrics-cert\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017538 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-cni-bin\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017599 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-run-ovn-kubernetes\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017696 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-run-netns\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017715 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqht\" (UniqueName: \"kubernetes.io/projected/8e3c2627-afc7-4b1e-9294-c376368ffff7-kube-api-access-rwqht\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 
19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017742 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-slash\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-node-log\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-env-overrides\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017816 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-systemd-units\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017868 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017904 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovnkube-script-lib\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017927 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-log-socket\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.017951 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-systemd\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.018208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.018326 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-cni-netd\") pod \"ovnkube-node-lcfqc\" (UID: 
\"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.018440 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovnkube-config\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.018552 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-ovn\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.018649 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-etc-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.019740 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-kubelet\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121169 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-run-ovn-kubernetes\") pod \"ovnkube-node-lcfqc\" (UID: 
\"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-run-netns\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwqht\" (UniqueName: \"kubernetes.io/projected/8e3c2627-afc7-4b1e-9294-c376368ffff7-kube-api-access-rwqht\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121719 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-slash\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-node-log\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-env-overrides\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-systemd-units\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121884 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovnkube-script-lib\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-log-socket\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-systemd\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121961 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121982 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-cni-netd\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122010 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-ovn\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122028 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovnkube-config\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-etc-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: 
I0219 19:30:04.122095 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-kubelet\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122130 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-var-lib-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122161 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovn-node-metrics-cert\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122195 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-cni-bin\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122261 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-cni-bin\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121247 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-run-ovn-kubernetes\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122312 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-slash\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.121352 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-run-netns\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122349 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-node-log\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122481 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-systemd\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122543 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-run-ovn\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122596 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-log-socket\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122682 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-systemd-units\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122738 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-kubelet\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122632 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-etc-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: 
\"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-var-lib-openvswitch\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-cni-netd\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.122704 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8e3c2627-afc7-4b1e-9294-c376368ffff7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.123455 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovnkube-config\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.123746 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovnkube-script-lib\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.124046 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8e3c2627-afc7-4b1e-9294-c376368ffff7-env-overrides\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.127299 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8e3c2627-afc7-4b1e-9294-c376368ffff7-ovn-node-metrics-cert\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.144658 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwqht\" (UniqueName: \"kubernetes.io/projected/8e3c2627-afc7-4b1e-9294-c376368ffff7-kube-api-access-rwqht\") pod \"ovnkube-node-lcfqc\" (UID: \"8e3c2627-afc7-4b1e-9294-c376368ffff7\") " pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.168511 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:04 crc kubenswrapper[4787]: W0219 19:30:04.200886 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e3c2627_afc7_4b1e_9294_c376368ffff7.slice/crio-67e43455645bf8557af4a4392112c692d4ebe1aa8832d1902ce0405fb16f22ec WatchSource:0}: Error finding container 67e43455645bf8557af4a4392112c692d4ebe1aa8832d1902ce0405fb16f22ec: Status 404 returned error can't find the container with id 67e43455645bf8557af4a4392112c692d4ebe1aa8832d1902ce0405fb16f22ec Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.253902 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/2.log" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.255487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"67e43455645bf8557af4a4392112c692d4ebe1aa8832d1902ce0405fb16f22ec"} Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.259336 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovn-acl-logging/0.log" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260027 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5xjgd_4989ff60-0c48-4f78-bcf6-2d394ee929fd/ovn-controller/0.log" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260420 4787 generic.go:334] "Generic (PLEG): container finished" podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3" exitCode=0 Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260453 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" containerID="29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd" exitCode=0 Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260488 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3"} Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260525 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd"} Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260539 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" event={"ID":"4989ff60-0c48-4f78-bcf6-2d394ee929fd","Type":"ContainerDied","Data":"3af053205aac22ed6c5d8d0bddbb344c801f6380db7a4f0e55f52596bfa23fb0"} Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260539 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5xjgd" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.260557 4787 scope.go:117] "RemoveContainer" containerID="52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.288382 4787 scope.go:117] "RemoveContainer" containerID="3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.314743 4787 scope.go:117] "RemoveContainer" containerID="4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.329026 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5xjgd"] Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.348379 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5xjgd"] Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.349801 4787 scope.go:117] "RemoveContainer" containerID="94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.380944 4787 scope.go:117] "RemoveContainer" containerID="dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.445552 4787 scope.go:117] "RemoveContainer" containerID="29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.504758 4787 scope.go:117] "RemoveContainer" containerID="aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.524906 4787 scope.go:117] "RemoveContainer" containerID="791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.548308 4787 scope.go:117] "RemoveContainer" 
containerID="09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.576245 4787 scope.go:117] "RemoveContainer" containerID="52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.576787 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520\": container with ID starting with 52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520 not found: ID does not exist" containerID="52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.576816 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520"} err="failed to get container status \"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520\": rpc error: code = NotFound desc = could not find container \"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520\": container with ID starting with 52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.576838 4787 scope.go:117] "RemoveContainer" containerID="3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.578643 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\": container with ID starting with 3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58 not found: ID does not exist" containerID="3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58" Feb 19 19:30:04 crc 
kubenswrapper[4787]: I0219 19:30:04.578668 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58"} err="failed to get container status \"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\": rpc error: code = NotFound desc = could not find container \"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\": container with ID starting with 3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.578683 4787 scope.go:117] "RemoveContainer" containerID="4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.581962 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\": container with ID starting with 4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d not found: ID does not exist" containerID="4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.582018 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d"} err="failed to get container status \"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\": rpc error: code = NotFound desc = could not find container \"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\": container with ID starting with 4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.582056 4787 scope.go:117] "RemoveContainer" containerID="94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1" Feb 19 
19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.585899 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\": container with ID starting with 94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1 not found: ID does not exist" containerID="94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.585947 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1"} err="failed to get container status \"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\": rpc error: code = NotFound desc = could not find container \"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\": container with ID starting with 94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.585983 4787 scope.go:117] "RemoveContainer" containerID="dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.587774 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\": container with ID starting with dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3 not found: ID does not exist" containerID="dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.587796 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3"} err="failed to get container status 
\"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\": rpc error: code = NotFound desc = could not find container \"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\": container with ID starting with dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.587813 4787 scope.go:117] "RemoveContainer" containerID="29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.588228 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\": container with ID starting with 29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd not found: ID does not exist" containerID="29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.588256 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd"} err="failed to get container status \"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\": rpc error: code = NotFound desc = could not find container \"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\": container with ID starting with 29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.588273 4787 scope.go:117] "RemoveContainer" containerID="aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.588591 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\": container with ID starting with aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398 not found: ID does not exist" containerID="aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.588648 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398"} err="failed to get container status \"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\": rpc error: code = NotFound desc = could not find container \"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\": container with ID starting with aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.588686 4787 scope.go:117] "RemoveContainer" containerID="791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.589098 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\": container with ID starting with 791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1 not found: ID does not exist" containerID="791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.589142 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1"} err="failed to get container status \"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\": rpc error: code = NotFound desc = could not find container \"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\": container with ID 
starting with 791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.589170 4787 scope.go:117] "RemoveContainer" containerID="09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24" Feb 19 19:30:04 crc kubenswrapper[4787]: E0219 19:30:04.589460 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\": container with ID starting with 09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24 not found: ID does not exist" containerID="09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.589495 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24"} err="failed to get container status \"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\": rpc error: code = NotFound desc = could not find container \"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\": container with ID starting with 09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.589522 4787 scope.go:117] "RemoveContainer" containerID="52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.589767 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520"} err="failed to get container status \"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520\": rpc error: code = NotFound desc = could not find container \"52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520\": 
container with ID starting with 52bc9a5346081c0b3e32f54197e787a70480fcc40c84cfcce1928db7a3173520 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.589786 4787 scope.go:117] "RemoveContainer" containerID="3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590051 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58"} err="failed to get container status \"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\": rpc error: code = NotFound desc = could not find container \"3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58\": container with ID starting with 3d82168d9b3e765162b22d439f0df952a94efcc0278081541939a5b9eca6bd58 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590073 4787 scope.go:117] "RemoveContainer" containerID="4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590287 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d"} err="failed to get container status \"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\": rpc error: code = NotFound desc = could not find container \"4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d\": container with ID starting with 4f44b49615e42e4bf54839e620f041a6b763439e8a2a203f42ac4ef04575976d not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590307 4787 scope.go:117] "RemoveContainer" containerID="94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590521 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1"} err="failed to get container status \"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\": rpc error: code = NotFound desc = could not find container \"94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1\": container with ID starting with 94ae03a1b682d4d613afb2c5f1088a611462fd8cd2d19a58f4b25da713f7c4a1 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590543 4787 scope.go:117] "RemoveContainer" containerID="dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590961 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3"} err="failed to get container status \"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\": rpc error: code = NotFound desc = could not find container \"dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3\": container with ID starting with dd3691a6556064df7e6ef1d5217198b15251363569c6f8b4c81e34a30fc48eb3 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.590997 4787 scope.go:117] "RemoveContainer" containerID="29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.591223 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd"} err="failed to get container status \"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\": rpc error: code = NotFound desc = could not find container \"29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd\": container with ID starting with 29867c3d60c5e447ce08230740bf16924481e57e5640f14fb25935999db3eebd not found: ID does not 
exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.591257 4787 scope.go:117] "RemoveContainer" containerID="aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.591441 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398"} err="failed to get container status \"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\": rpc error: code = NotFound desc = could not find container \"aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398\": container with ID starting with aee530dde71ce0f4d4fd7ad0d970fdbadbbf8633c6a3aaf1900e8e3863265398 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.591461 4787 scope.go:117] "RemoveContainer" containerID="791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.591712 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1"} err="failed to get container status \"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\": rpc error: code = NotFound desc = could not find container \"791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1\": container with ID starting with 791fec01f6936c1dd9be5d5183ca8c5137f2863a7ce6b5851d36c721765f3ad1 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.591731 4787 scope.go:117] "RemoveContainer" containerID="09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.592004 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24"} err="failed to get container status 
\"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\": rpc error: code = NotFound desc = could not find container \"09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24\": container with ID starting with 09619d974920141afce6a172311b9ef6eb6b96bc7766acb5398745cba8217c24 not found: ID does not exist" Feb 19 19:30:04 crc kubenswrapper[4787]: I0219 19:30:04.899674 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4989ff60-0c48-4f78-bcf6-2d394ee929fd" path="/var/lib/kubelet/pods/4989ff60-0c48-4f78-bcf6-2d394ee929fd/volumes" Feb 19 19:30:05 crc kubenswrapper[4787]: I0219 19:30:05.266404 4787 generic.go:334] "Generic (PLEG): container finished" podID="8e3c2627-afc7-4b1e-9294-c376368ffff7" containerID="a3a5443b7d49f3308813e2886543ec6f23238ca35e6a30ac56207b1a99f6ce37" exitCode=0 Feb 19 19:30:05 crc kubenswrapper[4787]: I0219 19:30:05.266513 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerDied","Data":"a3a5443b7d49f3308813e2886543ec6f23238ca35e6a30ac56207b1a99f6ce37"} Feb 19 19:30:06 crc kubenswrapper[4787]: I0219 19:30:06.276890 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"d99f15d2123d70786f22eecbde605485c807af65b88c0d33ce72c867627fe86d"} Feb 19 19:30:06 crc kubenswrapper[4787]: I0219 19:30:06.277513 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"45e4d319f4e9ceb9f948a1bb65f2b72a06c320405f46d50c6fdebab03b78aba7"} Feb 19 19:30:06 crc kubenswrapper[4787]: I0219 19:30:06.277531 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" 
event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"0538592945c9e97e4e181ef260ae538bb78751744229baaf76dea278a24c5d68"} Feb 19 19:30:06 crc kubenswrapper[4787]: I0219 19:30:06.277546 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"a0cb2dfa78d459555adafa76f3ac93da40db94ef931601c46235711d20b6b554"} Feb 19 19:30:06 crc kubenswrapper[4787]: I0219 19:30:06.277561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"237ee09685ab0541666c1504744199750b57e23b1d47f6ac7b729f1ebb949e2c"} Feb 19 19:30:06 crc kubenswrapper[4787]: I0219 19:30:06.277576 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"aef6772e351c7c9dd23f3649ca4fe6f894e097c5b95c5f214e2cad1f88b1f502"} Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.814550 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw"] Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.815712 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.818190 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.818887 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-w5pch" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.821969 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.951972 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd"] Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.952719 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.954952 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.955120 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-49xjw" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.961425 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb"] Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.962372 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:08 crc kubenswrapper[4787]: I0219 19:30:08.987882 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xh2f\" (UniqueName: \"kubernetes.io/projected/b97e9051-e506-426c-9612-f504a878f9ed-kube-api-access-2xh2f\") pod \"obo-prometheus-operator-68bc856cb9-h2xrw\" (UID: \"b97e9051-e506-426c-9612-f504a878f9ed\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.089976 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41d8edf4-0b35-4651-a626-3a635bbf22a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb\" (UID: \"41d8edf4-0b35-4651-a626-3a635bbf22a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.090038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41d8edf4-0b35-4651-a626-3a635bbf22a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb\" (UID: \"41d8edf4-0b35-4651-a626-3a635bbf22a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.090075 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e7bb705-7e46-4eb6-93c1-a124f8ca77c8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd\" (UID: \"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 
19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.090108 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e7bb705-7e46-4eb6-93c1-a124f8ca77c8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd\" (UID: \"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.090166 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xh2f\" (UniqueName: \"kubernetes.io/projected/b97e9051-e506-426c-9612-f504a878f9ed-kube-api-access-2xh2f\") pod \"obo-prometheus-operator-68bc856cb9-h2xrw\" (UID: \"b97e9051-e506-426c-9612-f504a878f9ed\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.114316 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xh2f\" (UniqueName: \"kubernetes.io/projected/b97e9051-e506-426c-9612-f504a878f9ed-kube-api-access-2xh2f\") pod \"obo-prometheus-operator-68bc856cb9-h2xrw\" (UID: \"b97e9051-e506-426c-9612-f504a878f9ed\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.134109 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.135361 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9n572"] Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.137520 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.140711 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9pqhc" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.140900 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.185793 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(83a6e7c622c8bfa5ae11a4133de96bfe9531e33acb2928d43635524d14f1c416): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.185884 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(83a6e7c622c8bfa5ae11a4133de96bfe9531e33acb2928d43635524d14f1c416): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.185908 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(83a6e7c622c8bfa5ae11a4133de96bfe9531e33acb2928d43635524d14f1c416): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.185951 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators(b97e9051-e506-426c-9612-f504a878f9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators(b97e9051-e506-426c-9612-f504a878f9ed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(83a6e7c622c8bfa5ae11a4133de96bfe9531e33acb2928d43635524d14f1c416): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" podUID="b97e9051-e506-426c-9612-f504a878f9ed" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.192192 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41d8edf4-0b35-4651-a626-3a635bbf22a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb\" (UID: \"41d8edf4-0b35-4651-a626-3a635bbf22a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.192246 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41d8edf4-0b35-4651-a626-3a635bbf22a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb\" (UID: \"41d8edf4-0b35-4651-a626-3a635bbf22a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.192282 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e7bb705-7e46-4eb6-93c1-a124f8ca77c8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd\" (UID: \"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.192321 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e7bb705-7e46-4eb6-93c1-a124f8ca77c8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd\" (UID: \"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.196523 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e7bb705-7e46-4eb6-93c1-a124f8ca77c8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd\" (UID: \"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.197240 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e7bb705-7e46-4eb6-93c1-a124f8ca77c8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd\" (UID: \"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.199280 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/41d8edf4-0b35-4651-a626-3a635bbf22a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb\" (UID: 
\"41d8edf4-0b35-4651-a626-3a635bbf22a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.209269 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/41d8edf4-0b35-4651-a626-3a635bbf22a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb\" (UID: \"41d8edf4-0b35-4651-a626-3a635bbf22a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.271486 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.285117 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.293733 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4bl\" (UniqueName: \"kubernetes.io/projected/963c18fc-03cd-46a4-9130-3908e897870e-kube-api-access-5h4bl\") pod \"observability-operator-59bdc8b94-9n572\" (UID: \"963c18fc-03cd-46a4-9130-3908e897870e\") " pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.293789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/963c18fc-03cd-46a4-9130-3908e897870e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9n572\" (UID: \"963c18fc-03cd-46a4-9130-3908e897870e\") " pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.304577 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"9e0f40c32c159e2890a370e554668a5795212518d26a123b1bfcf40d2dbfaf98"} Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.315139 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(ac84bc33f8f7d61d7864f1ebefc9e1d3ca283644a6ed49c4d8888bd4038b9681): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.315213 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(ac84bc33f8f7d61d7864f1ebefc9e1d3ca283644a6ed49c4d8888bd4038b9681): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.315237 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(ac84bc33f8f7d61d7864f1ebefc9e1d3ca283644a6ed49c4d8888bd4038b9681): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.315287 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators(4e7bb705-7e46-4eb6-93c1-a124f8ca77c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators(4e7bb705-7e46-4eb6-93c1-a124f8ca77c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(ac84bc33f8f7d61d7864f1ebefc9e1d3ca283644a6ed49c4d8888bd4038b9681): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" podUID="4e7bb705-7e46-4eb6-93c1-a124f8ca77c8" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.330162 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(7bc7147ca918ea4a4c7a7059e1e88f55833093a72b12d73fed69376b15a549c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.330238 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(7bc7147ca918ea4a4c7a7059e1e88f55833093a72b12d73fed69376b15a549c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.330259 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(7bc7147ca918ea4a4c7a7059e1e88f55833093a72b12d73fed69376b15a549c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.330314 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators(41d8edf4-0b35-4651-a626-3a635bbf22a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators(41d8edf4-0b35-4651-a626-3a635bbf22a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(7bc7147ca918ea4a4c7a7059e1e88f55833093a72b12d73fed69376b15a549c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" podUID="41d8edf4-0b35-4651-a626-3a635bbf22a5" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.349182 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tfckq"] Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.350075 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.357995 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-9pcsd" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.395437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h4bl\" (UniqueName: \"kubernetes.io/projected/963c18fc-03cd-46a4-9130-3908e897870e-kube-api-access-5h4bl\") pod \"observability-operator-59bdc8b94-9n572\" (UID: \"963c18fc-03cd-46a4-9130-3908e897870e\") " pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.395513 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/963c18fc-03cd-46a4-9130-3908e897870e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9n572\" (UID: \"963c18fc-03cd-46a4-9130-3908e897870e\") " pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.410661 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/963c18fc-03cd-46a4-9130-3908e897870e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9n572\" (UID: \"963c18fc-03cd-46a4-9130-3908e897870e\") " pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.421585 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h4bl\" (UniqueName: \"kubernetes.io/projected/963c18fc-03cd-46a4-9130-3908e897870e-kube-api-access-5h4bl\") pod \"observability-operator-59bdc8b94-9n572\" (UID: \"963c18fc-03cd-46a4-9130-3908e897870e\") " pod="openshift-operators/observability-operator-59bdc8b94-9n572" 
Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.496837 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5709e38-dd1f-4a2a-ba8f-4da0055aaf57-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tfckq\" (UID: \"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57\") " pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.497136 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbhf\" (UniqueName: \"kubernetes.io/projected/a5709e38-dd1f-4a2a-ba8f-4da0055aaf57-kube-api-access-dkbhf\") pod \"perses-operator-5bf474d74f-tfckq\" (UID: \"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57\") " pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.532886 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.555029 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(6d259b4d0f62cdd544451a6084e785433a544a0a9f7257d8e03bc8b7a74c64b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.555126 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(6d259b4d0f62cdd544451a6084e785433a544a0a9f7257d8e03bc8b7a74c64b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.555154 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(6d259b4d0f62cdd544451a6084e785433a544a0a9f7257d8e03bc8b7a74c64b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.555231 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9n572_openshift-operators(963c18fc-03cd-46a4-9130-3908e897870e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9n572_openshift-operators(963c18fc-03cd-46a4-9130-3908e897870e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(6d259b4d0f62cdd544451a6084e785433a544a0a9f7257d8e03bc8b7a74c64b0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9n572" podUID="963c18fc-03cd-46a4-9130-3908e897870e" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.599132 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbhf\" (UniqueName: \"kubernetes.io/projected/a5709e38-dd1f-4a2a-ba8f-4da0055aaf57-kube-api-access-dkbhf\") pod \"perses-operator-5bf474d74f-tfckq\" (UID: \"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57\") " pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.599217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5709e38-dd1f-4a2a-ba8f-4da0055aaf57-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tfckq\" (UID: \"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57\") " pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.600432 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5709e38-dd1f-4a2a-ba8f-4da0055aaf57-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tfckq\" (UID: \"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57\") " pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.620415 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbhf\" (UniqueName: \"kubernetes.io/projected/a5709e38-dd1f-4a2a-ba8f-4da0055aaf57-kube-api-access-dkbhf\") pod \"perses-operator-5bf474d74f-tfckq\" (UID: \"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57\") " pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: I0219 19:30:09.675105 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.722794 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(3f0f161beb1132fd5c1f744cb2dd748da0ca74a0a8f6e662738b4a1ff28bd131): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.722879 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(3f0f161beb1132fd5c1f744cb2dd748da0ca74a0a8f6e662738b4a1ff28bd131): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.722904 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(3f0f161beb1132fd5c1f744cb2dd748da0ca74a0a8f6e662738b4a1ff28bd131): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:09 crc kubenswrapper[4787]: E0219 19:30:09.722963 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tfckq_openshift-operators(a5709e38-dd1f-4a2a-ba8f-4da0055aaf57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tfckq_openshift-operators(a5709e38-dd1f-4a2a-ba8f-4da0055aaf57)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(3f0f161beb1132fd5c1f744cb2dd748da0ca74a0a8f6e662738b4a1ff28bd131): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" podUID="a5709e38-dd1f-4a2a-ba8f-4da0055aaf57" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.320230 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" event={"ID":"8e3c2627-afc7-4b1e-9294-c376368ffff7","Type":"ContainerStarted","Data":"a042a71f86b61d376e271842208b3843c866c7ff126849d984da5ebc33676fc8"} Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.320762 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.320775 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.320782 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.352764 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:11 crc 
kubenswrapper[4787]: I0219 19:30:11.360956 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.362057 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" podStartSLOduration=8.362035847 podStartE2EDuration="8.362035847s" podCreationTimestamp="2026-02-19 19:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:30:11.354477142 +0000 UTC m=+679.145143084" watchObservedRunningTime="2026-02-19 19:30:11.362035847 +0000 UTC m=+679.152701789" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.737986 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9n572"] Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.738132 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.738688 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.771813 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(4aa1c5ffc6a0f6c0b9d11edc34711f698284b434ba20485794bbee6e1225b4ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.771877 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(4aa1c5ffc6a0f6c0b9d11edc34711f698284b434ba20485794bbee6e1225b4ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.771902 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(4aa1c5ffc6a0f6c0b9d11edc34711f698284b434ba20485794bbee6e1225b4ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.771975 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9n572_openshift-operators(963c18fc-03cd-46a4-9130-3908e897870e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9n572_openshift-operators(963c18fc-03cd-46a4-9130-3908e897870e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(4aa1c5ffc6a0f6c0b9d11edc34711f698284b434ba20485794bbee6e1225b4ec): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9n572" podUID="963c18fc-03cd-46a4-9130-3908e897870e" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.810950 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd"] Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.811080 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.811566 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.814844 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw"] Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.814978 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.815437 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.828419 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb"] Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.828543 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.829035 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.831832 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tfckq"] Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.831947 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:11 crc kubenswrapper[4787]: I0219 19:30:11.832393 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.864994 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(fe4db458e6d756ea94e585680c2d80dc61e490c85b1a12a61f5f669bb3e53221): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.865076 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(fe4db458e6d756ea94e585680c2d80dc61e490c85b1a12a61f5f669bb3e53221): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.865106 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(fe4db458e6d756ea94e585680c2d80dc61e490c85b1a12a61f5f669bb3e53221): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.865167 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators(4e7bb705-7e46-4eb6-93c1-a124f8ca77c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators(4e7bb705-7e46-4eb6-93c1-a124f8ca77c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(fe4db458e6d756ea94e585680c2d80dc61e490c85b1a12a61f5f669bb3e53221): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" podUID="4e7bb705-7e46-4eb6-93c1-a124f8ca77c8" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.901638 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(f1dcafc2cbbf527a123e90abccac92a4836a9968f805e1ccac12f173a7ffd8ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.901699 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(f1dcafc2cbbf527a123e90abccac92a4836a9968f805e1ccac12f173a7ffd8ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.901720 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(f1dcafc2cbbf527a123e90abccac92a4836a9968f805e1ccac12f173a7ffd8ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.901769 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators(b97e9051-e506-426c-9612-f504a878f9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators(b97e9051-e506-426c-9612-f504a878f9ed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(f1dcafc2cbbf527a123e90abccac92a4836a9968f805e1ccac12f173a7ffd8ee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" podUID="b97e9051-e506-426c-9612-f504a878f9ed" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.936961 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(497b2c48d599c35743d8fa0670cfcdbf652af2d429cf17870fdcdac321b02021): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.937305 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(497b2c48d599c35743d8fa0670cfcdbf652af2d429cf17870fdcdac321b02021): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.937331 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(497b2c48d599c35743d8fa0670cfcdbf652af2d429cf17870fdcdac321b02021): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.937387 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tfckq_openshift-operators(a5709e38-dd1f-4a2a-ba8f-4da0055aaf57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tfckq_openshift-operators(a5709e38-dd1f-4a2a-ba8f-4da0055aaf57)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(497b2c48d599c35743d8fa0670cfcdbf652af2d429cf17870fdcdac321b02021): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" podUID="a5709e38-dd1f-4a2a-ba8f-4da0055aaf57" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.943223 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(134e479d36b5e85b1dd53abfa5aae3a9ed2ee51176f387e5583499d891707f98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.943275 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(134e479d36b5e85b1dd53abfa5aae3a9ed2ee51176f387e5583499d891707f98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.943297 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(134e479d36b5e85b1dd53abfa5aae3a9ed2ee51176f387e5583499d891707f98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:11 crc kubenswrapper[4787]: E0219 19:30:11.943337 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators(41d8edf4-0b35-4651-a626-3a635bbf22a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators(41d8edf4-0b35-4651-a626-3a635bbf22a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(134e479d36b5e85b1dd53abfa5aae3a9ed2ee51176f387e5583499d891707f98): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" podUID="41d8edf4-0b35-4651-a626-3a635bbf22a5" Feb 19 19:30:16 crc kubenswrapper[4787]: I0219 19:30:16.891949 4787 scope.go:117] "RemoveContainer" containerID="ab6f912b26d7da8c204f3006c121135c14a78395a3837de5a8c6b3cba6c43a85" Feb 19 19:30:16 crc kubenswrapper[4787]: E0219 19:30:16.892561 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qxzkq_openshift-multus(f0706129-aa73-40ed-899f-02882ed5a4cc)\"" pod="openshift-multus/multus-qxzkq" podUID="f0706129-aa73-40ed-899f-02882ed5a4cc" Feb 19 19:30:23 crc kubenswrapper[4787]: I0219 19:30:23.890815 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:23 crc kubenswrapper[4787]: I0219 19:30:23.891800 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:23 crc kubenswrapper[4787]: E0219 19:30:23.919437 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(1fc773669cb4fd91f91be0cab24e31da2bee21bbf91f5cd0d668d2b735e5c688): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 19:30:23 crc kubenswrapper[4787]: E0219 19:30:23.919519 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(1fc773669cb4fd91f91be0cab24e31da2bee21bbf91f5cd0d668d2b735e5c688): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:23 crc kubenswrapper[4787]: E0219 19:30:23.919548 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(1fc773669cb4fd91f91be0cab24e31da2bee21bbf91f5cd0d668d2b735e5c688): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:23 crc kubenswrapper[4787]: E0219 19:30:23.919603 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9n572_openshift-operators(963c18fc-03cd-46a4-9130-3908e897870e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9n572_openshift-operators(963c18fc-03cd-46a4-9130-3908e897870e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9n572_openshift-operators_963c18fc-03cd-46a4-9130-3908e897870e_0(1fc773669cb4fd91f91be0cab24e31da2bee21bbf91f5cd0d668d2b735e5c688): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9n572" podUID="963c18fc-03cd-46a4-9130-3908e897870e" Feb 19 19:30:24 crc kubenswrapper[4787]: I0219 19:30:24.892962 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:24 crc kubenswrapper[4787]: I0219 19:30:24.894140 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:24 crc kubenswrapper[4787]: E0219 19:30:24.928704 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(26a256d77bf9f0411e2be9d0fe856557eba1c883a82ce6a7b60545d9836f33af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:24 crc kubenswrapper[4787]: E0219 19:30:24.928835 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(26a256d77bf9f0411e2be9d0fe856557eba1c883a82ce6a7b60545d9836f33af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:24 crc kubenswrapper[4787]: E0219 19:30:24.928909 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(26a256d77bf9f0411e2be9d0fe856557eba1c883a82ce6a7b60545d9836f33af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:24 crc kubenswrapper[4787]: E0219 19:30:24.929017 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators(4e7bb705-7e46-4eb6-93c1-a124f8ca77c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators(4e7bb705-7e46-4eb6-93c1-a124f8ca77c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_openshift-operators_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8_0(26a256d77bf9f0411e2be9d0fe856557eba1c883a82ce6a7b60545d9836f33af): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" podUID="4e7bb705-7e46-4eb6-93c1-a124f8ca77c8" Feb 19 19:30:25 crc kubenswrapper[4787]: I0219 19:30:25.891658 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:25 crc kubenswrapper[4787]: I0219 19:30:25.892530 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:25 crc kubenswrapper[4787]: E0219 19:30:25.929961 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(3ad2d8965e8bf8c5ba994b66ea1f0ff3ad221e6eb37b3961e74b44918759fcdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:25 crc kubenswrapper[4787]: E0219 19:30:25.930796 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(3ad2d8965e8bf8c5ba994b66ea1f0ff3ad221e6eb37b3961e74b44918759fcdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:25 crc kubenswrapper[4787]: E0219 19:30:25.930854 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(3ad2d8965e8bf8c5ba994b66ea1f0ff3ad221e6eb37b3961e74b44918759fcdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:25 crc kubenswrapper[4787]: E0219 19:30:25.930920 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators(41d8edf4-0b35-4651-a626-3a635bbf22a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators(41d8edf4-0b35-4651-a626-3a635bbf22a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_openshift-operators_41d8edf4-0b35-4651-a626-3a635bbf22a5_0(3ad2d8965e8bf8c5ba994b66ea1f0ff3ad221e6eb37b3961e74b44918759fcdb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" podUID="41d8edf4-0b35-4651-a626-3a635bbf22a5" Feb 19 19:30:26 crc kubenswrapper[4787]: I0219 19:30:26.891119 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:26 crc kubenswrapper[4787]: I0219 19:30:26.891137 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:26 crc kubenswrapper[4787]: I0219 19:30:26.891806 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:26 crc kubenswrapper[4787]: I0219 19:30:26.891807 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.945403 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(863b3ad401840c194e4f4051b836be87b3b640d58bedc325daaffbab7d590084): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.945464 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(863b3ad401840c194e4f4051b836be87b3b640d58bedc325daaffbab7d590084): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.945486 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(863b3ad401840c194e4f4051b836be87b3b640d58bedc325daaffbab7d590084): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.945536 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-tfckq_openshift-operators(a5709e38-dd1f-4a2a-ba8f-4da0055aaf57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-tfckq_openshift-operators(a5709e38-dd1f-4a2a-ba8f-4da0055aaf57)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-tfckq_openshift-operators_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57_0(863b3ad401840c194e4f4051b836be87b3b640d58bedc325daaffbab7d590084): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" podUID="a5709e38-dd1f-4a2a-ba8f-4da0055aaf57" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.954417 4787 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(ad107d03664348cf74eb37808182a8d8ed595f4c81a66f6e6a85c6abf812ee1a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.954477 4787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(ad107d03664348cf74eb37808182a8d8ed595f4c81a66f6e6a85c6abf812ee1a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.954499 4787 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(ad107d03664348cf74eb37808182a8d8ed595f4c81a66f6e6a85c6abf812ee1a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:26 crc kubenswrapper[4787]: E0219 19:30:26.954539 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators(b97e9051-e506-426c-9612-f504a878f9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators(b97e9051-e506-426c-9612-f504a878f9ed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-h2xrw_openshift-operators_b97e9051-e506-426c-9612-f504a878f9ed_0(ad107d03664348cf74eb37808182a8d8ed595f4c81a66f6e6a85c6abf812ee1a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" podUID="b97e9051-e506-426c-9612-f504a878f9ed" Feb 19 19:30:28 crc kubenswrapper[4787]: I0219 19:30:28.891761 4787 scope.go:117] "RemoveContainer" containerID="ab6f912b26d7da8c204f3006c121135c14a78395a3837de5a8c6b3cba6c43a85" Feb 19 19:30:29 crc kubenswrapper[4787]: I0219 19:30:29.418231 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qxzkq_f0706129-aa73-40ed-899f-02882ed5a4cc/kube-multus/2.log" Feb 19 19:30:29 crc kubenswrapper[4787]: I0219 19:30:29.418557 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qxzkq" event={"ID":"f0706129-aa73-40ed-899f-02882ed5a4cc","Type":"ContainerStarted","Data":"279c656f5d9f7d135cf139d5b05880cd62df76a50c5a725a360284291145489a"} Feb 19 19:30:34 crc kubenswrapper[4787]: I0219 19:30:34.194312 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lcfqc" Feb 19 19:30:35 crc kubenswrapper[4787]: I0219 19:30:35.891135 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:35 crc kubenswrapper[4787]: I0219 19:30:35.891890 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:36 crc kubenswrapper[4787]: I0219 19:30:36.314836 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9n572"] Feb 19 19:30:36 crc kubenswrapper[4787]: I0219 19:30:36.459214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9n572" event={"ID":"963c18fc-03cd-46a4-9130-3908e897870e","Type":"ContainerStarted","Data":"e3df4e9689c0a717bc4d8f7b3c626b579c82c27dffc5221ce7d2cd4b171be9ea"} Feb 19 19:30:36 crc kubenswrapper[4787]: I0219 19:30:36.891872 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:36 crc kubenswrapper[4787]: I0219 19:30:36.891931 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:36 crc kubenswrapper[4787]: I0219 19:30:36.892629 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" Feb 19 19:30:36 crc kubenswrapper[4787]: I0219 19:30:36.892743 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" Feb 19 19:30:37 crc kubenswrapper[4787]: I0219 19:30:37.192646 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb"] Feb 19 19:30:37 crc kubenswrapper[4787]: I0219 19:30:37.428042 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd"] Feb 19 19:30:37 crc kubenswrapper[4787]: W0219 19:30:37.428835 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e7bb705_7e46_4eb6_93c1_a124f8ca77c8.slice/crio-cc31c50a0bccd98760b71cb0704acc17080067eb910ee2361dce6fe037a0d269 WatchSource:0}: Error finding container cc31c50a0bccd98760b71cb0704acc17080067eb910ee2361dce6fe037a0d269: Status 404 returned error can't find the container with id cc31c50a0bccd98760b71cb0704acc17080067eb910ee2361dce6fe037a0d269 Feb 19 19:30:37 crc kubenswrapper[4787]: I0219 19:30:37.465540 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" event={"ID":"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8","Type":"ContainerStarted","Data":"cc31c50a0bccd98760b71cb0704acc17080067eb910ee2361dce6fe037a0d269"} Feb 19 19:30:37 crc kubenswrapper[4787]: I0219 19:30:37.466504 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" event={"ID":"41d8edf4-0b35-4651-a626-3a635bbf22a5","Type":"ContainerStarted","Data":"def944ede9107402f9b7bec90e147714cd10c0789877ff02fda8a782750ec061"} Feb 19 19:30:38 crc kubenswrapper[4787]: I0219 19:30:38.891576 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:38 crc kubenswrapper[4787]: I0219 19:30:38.892353 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" Feb 19 19:30:39 crc kubenswrapper[4787]: I0219 19:30:39.264655 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:30:39 crc kubenswrapper[4787]: I0219 19:30:39.264731 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:30:41 crc kubenswrapper[4787]: I0219 19:30:41.891401 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:41 crc kubenswrapper[4787]: I0219 19:30:41.892065 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.440697 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw"] Feb 19 19:30:43 crc kubenswrapper[4787]: W0219 19:30:43.452252 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97e9051_e506_426c_9612_f504a878f9ed.slice/crio-0dd478c4d85bf93c40ef31add581b3fd66b9f2d116cb953eef5c3b6d01c80849 WatchSource:0}: Error finding container 0dd478c4d85bf93c40ef31add581b3fd66b9f2d116cb953eef5c3b6d01c80849: Status 404 returned error can't find the container with id 0dd478c4d85bf93c40ef31add581b3fd66b9f2d116cb953eef5c3b6d01c80849 Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.484035 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tfckq"] Feb 19 19:30:43 crc kubenswrapper[4787]: W0219 19:30:43.492685 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5709e38_dd1f_4a2a_ba8f_4da0055aaf57.slice/crio-5dd62ff40d5d8e82f7162f0a1fc57f8ba948a1ddf58c28c36affc1b69b211f79 WatchSource:0}: Error finding container 5dd62ff40d5d8e82f7162f0a1fc57f8ba948a1ddf58c28c36affc1b69b211f79: Status 404 returned error can't find the container with id 5dd62ff40d5d8e82f7162f0a1fc57f8ba948a1ddf58c28c36affc1b69b211f79 Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.522181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" event={"ID":"b97e9051-e506-426c-9612-f504a878f9ed","Type":"ContainerStarted","Data":"0dd478c4d85bf93c40ef31add581b3fd66b9f2d116cb953eef5c3b6d01c80849"} Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.523838 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" event={"ID":"4e7bb705-7e46-4eb6-93c1-a124f8ca77c8","Type":"ContainerStarted","Data":"7ff453bb63fc7c64591ed3399643b68665e09e93ea5ce592b884e780cd3ca3ef"} Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.525854 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" event={"ID":"41d8edf4-0b35-4651-a626-3a635bbf22a5","Type":"ContainerStarted","Data":"b88bd94e9377effb12e58b349130f3c462d9fbced9c0ddd82975b6e57b5555e1"} Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.527441 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" event={"ID":"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57","Type":"ContainerStarted","Data":"5dd62ff40d5d8e82f7162f0a1fc57f8ba948a1ddf58c28c36affc1b69b211f79"} Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.529528 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9n572" event={"ID":"963c18fc-03cd-46a4-9130-3908e897870e","Type":"ContainerStarted","Data":"17245eac501a397ceaaae9b2ea35bc98f609ae6833450b649bab583cb9e3a5c9"} Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.529979 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.537004 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9n572" Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.557646 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd" podStartSLOduration=30.02382154 podStartE2EDuration="35.55759769s" podCreationTimestamp="2026-02-19 19:30:08 +0000 UTC" 
firstStartedPulling="2026-02-19 19:30:37.43157467 +0000 UTC m=+705.222240612" lastFinishedPulling="2026-02-19 19:30:42.96535082 +0000 UTC m=+710.756016762" observedRunningTime="2026-02-19 19:30:43.5456534 +0000 UTC m=+711.336319372" watchObservedRunningTime="2026-02-19 19:30:43.55759769 +0000 UTC m=+711.348263642" Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.824093 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9n572" podStartSLOduration=28.176345728 podStartE2EDuration="34.82407351s" podCreationTimestamp="2026-02-19 19:30:09 +0000 UTC" firstStartedPulling="2026-02-19 19:30:36.320980314 +0000 UTC m=+704.111646266" lastFinishedPulling="2026-02-19 19:30:42.968708106 +0000 UTC m=+710.759374048" observedRunningTime="2026-02-19 19:30:43.822251809 +0000 UTC m=+711.612917751" watchObservedRunningTime="2026-02-19 19:30:43.82407351 +0000 UTC m=+711.614739452" Feb 19 19:30:43 crc kubenswrapper[4787]: I0219 19:30:43.855477 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb" podStartSLOduration=30.071051635 podStartE2EDuration="35.855453904s" podCreationTimestamp="2026-02-19 19:30:08 +0000 UTC" firstStartedPulling="2026-02-19 19:30:37.209511975 +0000 UTC m=+705.000177907" lastFinishedPulling="2026-02-19 19:30:42.993914234 +0000 UTC m=+710.784580176" observedRunningTime="2026-02-19 19:30:43.851185783 +0000 UTC m=+711.641851725" watchObservedRunningTime="2026-02-19 19:30:43.855453904 +0000 UTC m=+711.646119856" Feb 19 19:30:46 crc kubenswrapper[4787]: I0219 19:30:46.549624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" event={"ID":"a5709e38-dd1f-4a2a-ba8f-4da0055aaf57","Type":"ContainerStarted","Data":"5ad756f0868a13b7a497615cea0df334fcd61e0b5f3789931b3bc0ab8f363893"} Feb 19 19:30:46 crc kubenswrapper[4787]: I0219 
19:30:46.552012 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:30:46 crc kubenswrapper[4787]: I0219 19:30:46.570704 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" podStartSLOduration=34.705603256 podStartE2EDuration="37.570684925s" podCreationTimestamp="2026-02-19 19:30:09 +0000 UTC" firstStartedPulling="2026-02-19 19:30:43.495646985 +0000 UTC m=+711.286312957" lastFinishedPulling="2026-02-19 19:30:46.360728684 +0000 UTC m=+714.151394626" observedRunningTime="2026-02-19 19:30:46.564724143 +0000 UTC m=+714.355390095" watchObservedRunningTime="2026-02-19 19:30:46.570684925 +0000 UTC m=+714.361350867" Feb 19 19:30:47 crc kubenswrapper[4787]: I0219 19:30:47.557852 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" event={"ID":"b97e9051-e506-426c-9612-f504a878f9ed","Type":"ContainerStarted","Data":"602c069a9df9b4b19a2ba65af28dd9fe77aac3505b01ea1a53a7ffd4fd050917"} Feb 19 19:30:47 crc kubenswrapper[4787]: I0219 19:30:47.578008 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-h2xrw" podStartSLOduration=36.673441318 podStartE2EDuration="39.57798693s" podCreationTimestamp="2026-02-19 19:30:08 +0000 UTC" firstStartedPulling="2026-02-19 19:30:43.457219572 +0000 UTC m=+711.247885514" lastFinishedPulling="2026-02-19 19:30:46.361765184 +0000 UTC m=+714.152431126" observedRunningTime="2026-02-19 19:30:47.576815376 +0000 UTC m=+715.367481358" watchObservedRunningTime="2026-02-19 19:30:47.57798693 +0000 UTC m=+715.368652872" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.668454 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7"] Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 
19:30:49.670191 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.678341 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cc784" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.678404 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.678672 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.697534 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-cktgf"] Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.698531 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cktgf" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.705297 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mh8g6" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.707926 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7"] Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.726103 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cktgf"] Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.737840 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-68jqv"] Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.738840 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.741730 4787 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fvlcd" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.749756 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mschj\" (UniqueName: \"kubernetes.io/projected/43388b8a-9947-4159-bb64-7dd8745c2e47-kube-api-access-mschj\") pod \"cert-manager-858654f9db-cktgf\" (UID: \"43388b8a-9947-4159-bb64-7dd8745c2e47\") " pod="cert-manager/cert-manager-858654f9db-cktgf" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.749865 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhs8\" (UniqueName: \"kubernetes.io/projected/99b1a5a1-71a5-4caa-a647-68f6e7d96b96-kube-api-access-tvhs8\") pod \"cert-manager-cainjector-cf98fcc89-xnxh7\" (UID: \"99b1a5a1-71a5-4caa-a647-68f6e7d96b96\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.756271 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-68jqv"] Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.851691 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvhs8\" (UniqueName: \"kubernetes.io/projected/99b1a5a1-71a5-4caa-a647-68f6e7d96b96-kube-api-access-tvhs8\") pod \"cert-manager-cainjector-cf98fcc89-xnxh7\" (UID: \"99b1a5a1-71a5-4caa-a647-68f6e7d96b96\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.851790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mschj\" (UniqueName: 
\"kubernetes.io/projected/43388b8a-9947-4159-bb64-7dd8745c2e47-kube-api-access-mschj\") pod \"cert-manager-858654f9db-cktgf\" (UID: \"43388b8a-9947-4159-bb64-7dd8745c2e47\") " pod="cert-manager/cert-manager-858654f9db-cktgf" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.851852 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxgj\" (UniqueName: \"kubernetes.io/projected/f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2-kube-api-access-8fxgj\") pod \"cert-manager-webhook-687f57d79b-68jqv\" (UID: \"f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.874292 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvhs8\" (UniqueName: \"kubernetes.io/projected/99b1a5a1-71a5-4caa-a647-68f6e7d96b96-kube-api-access-tvhs8\") pod \"cert-manager-cainjector-cf98fcc89-xnxh7\" (UID: \"99b1a5a1-71a5-4caa-a647-68f6e7d96b96\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.877330 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mschj\" (UniqueName: \"kubernetes.io/projected/43388b8a-9947-4159-bb64-7dd8745c2e47-kube-api-access-mschj\") pod \"cert-manager-858654f9db-cktgf\" (UID: \"43388b8a-9947-4159-bb64-7dd8745c2e47\") " pod="cert-manager/cert-manager-858654f9db-cktgf" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.952412 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxgj\" (UniqueName: \"kubernetes.io/projected/f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2-kube-api-access-8fxgj\") pod \"cert-manager-webhook-687f57d79b-68jqv\" (UID: \"f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.969659 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxgj\" (UniqueName: \"kubernetes.io/projected/f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2-kube-api-access-8fxgj\") pod \"cert-manager-webhook-687f57d79b-68jqv\" (UID: \"f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2\") " pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:30:49 crc kubenswrapper[4787]: I0219 19:30:49.993312 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.026785 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-cktgf" Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.063867 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.230727 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7"] Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.507286 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-68jqv"] Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.514572 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-cktgf"] Feb 19 19:30:50 crc kubenswrapper[4787]: W0219 19:30:50.517049 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45abe44_787c_4b28_b7d1_e5b5b3e7d0e2.slice/crio-7caace805adc3fe95a2ada61e0bd1950fe73a1329043b14c0c198bb8b3631243 WatchSource:0}: Error finding container 7caace805adc3fe95a2ada61e0bd1950fe73a1329043b14c0c198bb8b3631243: Status 404 returned error can't find the container with id 7caace805adc3fe95a2ada61e0bd1950fe73a1329043b14c0c198bb8b3631243 Feb 19 
19:30:50 crc kubenswrapper[4787]: W0219 19:30:50.522655 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43388b8a_9947_4159_bb64_7dd8745c2e47.slice/crio-b8c57b01ad45888756f43d8dea5eefc367b9eaf37096c2dc7ca71caa827f1482 WatchSource:0}: Error finding container b8c57b01ad45888756f43d8dea5eefc367b9eaf37096c2dc7ca71caa827f1482: Status 404 returned error can't find the container with id b8c57b01ad45888756f43d8dea5eefc367b9eaf37096c2dc7ca71caa827f1482 Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.577845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" event={"ID":"f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2","Type":"ContainerStarted","Data":"7caace805adc3fe95a2ada61e0bd1950fe73a1329043b14c0c198bb8b3631243"} Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.578947 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" event={"ID":"99b1a5a1-71a5-4caa-a647-68f6e7d96b96","Type":"ContainerStarted","Data":"4ed474fb9a180ccc78ba34e7da41302673b6e3b1ccf3cfd194b8971ba1151115"} Feb 19 19:30:50 crc kubenswrapper[4787]: I0219 19:30:50.580702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cktgf" event={"ID":"43388b8a-9947-4159-bb64-7dd8745c2e47","Type":"ContainerStarted","Data":"b8c57b01ad45888756f43d8dea5eefc367b9eaf37096c2dc7ca71caa827f1482"} Feb 19 19:30:52 crc kubenswrapper[4787]: I0219 19:30:52.605572 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" event={"ID":"99b1a5a1-71a5-4caa-a647-68f6e7d96b96","Type":"ContainerStarted","Data":"a7e8df88f106b235a7fd754a842cc602325e90b1dbedf6224f0c51b187b2cef7"} Feb 19 19:30:52 crc kubenswrapper[4787]: I0219 19:30:52.626995 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-xnxh7" podStartSLOduration=1.6722041220000001 podStartE2EDuration="3.626975409s" podCreationTimestamp="2026-02-19 19:30:49 +0000 UTC" firstStartedPulling="2026-02-19 19:30:50.253809703 +0000 UTC m=+718.044475645" lastFinishedPulling="2026-02-19 19:30:52.20858099 +0000 UTC m=+719.999246932" observedRunningTime="2026-02-19 19:30:52.618533434 +0000 UTC m=+720.409199386" watchObservedRunningTime="2026-02-19 19:30:52.626975409 +0000 UTC m=+720.417641351" Feb 19 19:30:54 crc kubenswrapper[4787]: I0219 19:30:54.620228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" event={"ID":"f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2","Type":"ContainerStarted","Data":"57e7cbbf0147de6f0531125a78c003b7e3d64988140e8a208d8fec6de6d84ab6"} Feb 19 19:30:54 crc kubenswrapper[4787]: I0219 19:30:54.620690 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:30:54 crc kubenswrapper[4787]: I0219 19:30:54.621784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-cktgf" event={"ID":"43388b8a-9947-4159-bb64-7dd8745c2e47","Type":"ContainerStarted","Data":"ed38bc5a81755c4344f33a7c9b412096a672e9be44c1e5339120f54aad8cc9d5"} Feb 19 19:30:54 crc kubenswrapper[4787]: I0219 19:30:54.639108 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" podStartSLOduration=1.9361965780000001 podStartE2EDuration="5.639090847s" podCreationTimestamp="2026-02-19 19:30:49 +0000 UTC" firstStartedPulling="2026-02-19 19:30:50.519409836 +0000 UTC m=+718.310075778" lastFinishedPulling="2026-02-19 19:30:54.222304105 +0000 UTC m=+722.012970047" observedRunningTime="2026-02-19 19:30:54.635300957 +0000 UTC m=+722.425966899" watchObservedRunningTime="2026-02-19 19:30:54.639090847 +0000 UTC m=+722.429756789" Feb 19 19:30:54 
crc kubenswrapper[4787]: I0219 19:30:54.653839 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-cktgf" podStartSLOduration=1.919263388 podStartE2EDuration="5.653820744s" podCreationTimestamp="2026-02-19 19:30:49 +0000 UTC" firstStartedPulling="2026-02-19 19:30:50.525966456 +0000 UTC m=+718.316632388" lastFinishedPulling="2026-02-19 19:30:54.260523812 +0000 UTC m=+722.051189744" observedRunningTime="2026-02-19 19:30:54.649454337 +0000 UTC m=+722.440120279" watchObservedRunningTime="2026-02-19 19:30:54.653820744 +0000 UTC m=+722.444486686" Feb 19 19:30:59 crc kubenswrapper[4787]: I0219 19:30:59.681339 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" Feb 19 19:31:00 crc kubenswrapper[4787]: I0219 19:31:00.066714 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" Feb 19 19:31:09 crc kubenswrapper[4787]: I0219 19:31:09.263647 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:31:09 crc kubenswrapper[4787]: I0219 19:31:09.264201 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.306870 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28"] Feb 19 19:31:21 crc kubenswrapper[4787]: 
I0219 19:31:21.309016 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.315113 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.322656 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28"] Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.424554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.424681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.424843 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg4pn\" (UniqueName: \"kubernetes.io/projected/05bb6877-4f7a-44ef-9473-256081113294-kube-api-access-mg4pn\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.491842 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp"] Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.493483 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.503838 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp"] Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.526359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.526410 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg4pn\" (UniqueName: \"kubernetes.io/projected/05bb6877-4f7a-44ef-9473-256081113294-kube-api-access-mg4pn\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.526459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-util\") pod 
\"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.526489 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.526510 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4ps\" (UniqueName: \"kubernetes.io/projected/02139a9b-2832-4a5f-8d79-e553400a8422-kube-api-access-lj4ps\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.526659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.527020 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.528207 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.547728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg4pn\" (UniqueName: \"kubernetes.io/projected/05bb6877-4f7a-44ef-9473-256081113294-kube-api-access-mg4pn\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.628706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.628783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4ps\" (UniqueName: \"kubernetes.io/projected/02139a9b-2832-4a5f-8d79-e553400a8422-kube-api-access-lj4ps\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 
19:31:21.628849 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.629655 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.630014 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.635953 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.656839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4ps\" (UniqueName: \"kubernetes.io/projected/02139a9b-2832-4a5f-8d79-e553400a8422-kube-api-access-lj4ps\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:21 crc kubenswrapper[4787]: I0219 19:31:21.811726 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.083667 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28"] Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.100719 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp"] Feb 19 19:31:22 crc kubenswrapper[4787]: W0219 19:31:22.108572 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02139a9b_2832_4a5f_8d79_e553400a8422.slice/crio-13c8b5d83c067cf2dbb2294edfa785147e535e9212a9006b063334bde514090c WatchSource:0}: Error finding container 13c8b5d83c067cf2dbb2294edfa785147e535e9212a9006b063334bde514090c: Status 404 returned error can't find the container with id 13c8b5d83c067cf2dbb2294edfa785147e535e9212a9006b063334bde514090c Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.829661 4787 generic.go:334] "Generic (PLEG): container finished" podID="02139a9b-2832-4a5f-8d79-e553400a8422" 
containerID="83f6a43672dcb72c50306de16e4181f4076a56ed84fca73d9a671cca088a021f" exitCode=0 Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.829769 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" event={"ID":"02139a9b-2832-4a5f-8d79-e553400a8422","Type":"ContainerDied","Data":"83f6a43672dcb72c50306de16e4181f4076a56ed84fca73d9a671cca088a021f"} Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.829981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" event={"ID":"02139a9b-2832-4a5f-8d79-e553400a8422","Type":"ContainerStarted","Data":"13c8b5d83c067cf2dbb2294edfa785147e535e9212a9006b063334bde514090c"} Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.831260 4787 generic.go:334] "Generic (PLEG): container finished" podID="05bb6877-4f7a-44ef-9473-256081113294" containerID="de4ef3dcb79bcc994304f5bb48a00c3e27bf19c9170dd5ef72dd954988825438" exitCode=0 Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.831289 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" event={"ID":"05bb6877-4f7a-44ef-9473-256081113294","Type":"ContainerDied","Data":"de4ef3dcb79bcc994304f5bb48a00c3e27bf19c9170dd5ef72dd954988825438"} Feb 19 19:31:22 crc kubenswrapper[4787]: I0219 19:31:22.831315 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" event={"ID":"05bb6877-4f7a-44ef-9473-256081113294","Type":"ContainerStarted","Data":"17e6ddfb7ac89a6116060cc280722ed447e61d7ccf0729b01965c1381276b5bd"} Feb 19 19:31:24 crc kubenswrapper[4787]: I0219 19:31:24.848654 4787 generic.go:334] "Generic (PLEG): container finished" podID="02139a9b-2832-4a5f-8d79-e553400a8422" 
containerID="8856b2627c333f028b20ad6bd350104b66b8c8495ff9418d9bd80cab8794c427" exitCode=0 Feb 19 19:31:24 crc kubenswrapper[4787]: I0219 19:31:24.848763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" event={"ID":"02139a9b-2832-4a5f-8d79-e553400a8422","Type":"ContainerDied","Data":"8856b2627c333f028b20ad6bd350104b66b8c8495ff9418d9bd80cab8794c427"} Feb 19 19:31:24 crc kubenswrapper[4787]: I0219 19:31:24.852226 4787 generic.go:334] "Generic (PLEG): container finished" podID="05bb6877-4f7a-44ef-9473-256081113294" containerID="54b622d5515bfb706aafb993c9d1bede0c2951a5e0d91df8e0ecfc1cd8204fe9" exitCode=0 Feb 19 19:31:24 crc kubenswrapper[4787]: I0219 19:31:24.852255 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" event={"ID":"05bb6877-4f7a-44ef-9473-256081113294","Type":"ContainerDied","Data":"54b622d5515bfb706aafb993c9d1bede0c2951a5e0d91df8e0ecfc1cd8204fe9"} Feb 19 19:31:25 crc kubenswrapper[4787]: I0219 19:31:25.858982 4787 generic.go:334] "Generic (PLEG): container finished" podID="02139a9b-2832-4a5f-8d79-e553400a8422" containerID="88e51bb7de517131075dda0337a6b68cef59c8b743a21295a72b440edce35a32" exitCode=0 Feb 19 19:31:25 crc kubenswrapper[4787]: I0219 19:31:25.859051 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" event={"ID":"02139a9b-2832-4a5f-8d79-e553400a8422","Type":"ContainerDied","Data":"88e51bb7de517131075dda0337a6b68cef59c8b743a21295a72b440edce35a32"} Feb 19 19:31:25 crc kubenswrapper[4787]: I0219 19:31:25.862516 4787 generic.go:334] "Generic (PLEG): container finished" podID="05bb6877-4f7a-44ef-9473-256081113294" containerID="31e1a680888a8059d20e2eefcbfc1de0345d532277ba635f968c3a5e13aaa1ac" exitCode=0 Feb 19 19:31:25 crc kubenswrapper[4787]: I0219 19:31:25.862583 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" event={"ID":"05bb6877-4f7a-44ef-9473-256081113294","Type":"ContainerDied","Data":"31e1a680888a8059d20e2eefcbfc1de0345d532277ba635f968c3a5e13aaa1ac"} Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.138856 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.143979 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.222288 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-bundle\") pod \"05bb6877-4f7a-44ef-9473-256081113294\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.222412 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj4ps\" (UniqueName: \"kubernetes.io/projected/02139a9b-2832-4a5f-8d79-e553400a8422-kube-api-access-lj4ps\") pod \"02139a9b-2832-4a5f-8d79-e553400a8422\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.222449 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-util\") pod \"02139a9b-2832-4a5f-8d79-e553400a8422\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.222483 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg4pn\" (UniqueName: 
\"kubernetes.io/projected/05bb6877-4f7a-44ef-9473-256081113294-kube-api-access-mg4pn\") pod \"05bb6877-4f7a-44ef-9473-256081113294\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.222503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-util\") pod \"05bb6877-4f7a-44ef-9473-256081113294\" (UID: \"05bb6877-4f7a-44ef-9473-256081113294\") " Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.222531 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-bundle\") pod \"02139a9b-2832-4a5f-8d79-e553400a8422\" (UID: \"02139a9b-2832-4a5f-8d79-e553400a8422\") " Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.223767 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-bundle" (OuterVolumeSpecName: "bundle") pod "02139a9b-2832-4a5f-8d79-e553400a8422" (UID: "02139a9b-2832-4a5f-8d79-e553400a8422"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.224012 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-bundle" (OuterVolumeSpecName: "bundle") pod "05bb6877-4f7a-44ef-9473-256081113294" (UID: "05bb6877-4f7a-44ef-9473-256081113294"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.228379 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bb6877-4f7a-44ef-9473-256081113294-kube-api-access-mg4pn" (OuterVolumeSpecName: "kube-api-access-mg4pn") pod "05bb6877-4f7a-44ef-9473-256081113294" (UID: "05bb6877-4f7a-44ef-9473-256081113294"). InnerVolumeSpecName "kube-api-access-mg4pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.229704 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02139a9b-2832-4a5f-8d79-e553400a8422-kube-api-access-lj4ps" (OuterVolumeSpecName: "kube-api-access-lj4ps") pod "02139a9b-2832-4a5f-8d79-e553400a8422" (UID: "02139a9b-2832-4a5f-8d79-e553400a8422"). InnerVolumeSpecName "kube-api-access-lj4ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.247723 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-util" (OuterVolumeSpecName: "util") pod "02139a9b-2832-4a5f-8d79-e553400a8422" (UID: "02139a9b-2832-4a5f-8d79-e553400a8422"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.250019 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-util" (OuterVolumeSpecName: "util") pod "05bb6877-4f7a-44ef-9473-256081113294" (UID: "05bb6877-4f7a-44ef-9473-256081113294"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.325147 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj4ps\" (UniqueName: \"kubernetes.io/projected/02139a9b-2832-4a5f-8d79-e553400a8422-kube-api-access-lj4ps\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.325209 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-util\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.325230 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg4pn\" (UniqueName: \"kubernetes.io/projected/05bb6877-4f7a-44ef-9473-256081113294-kube-api-access-mg4pn\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.325250 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-util\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.325267 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02139a9b-2832-4a5f-8d79-e553400a8422-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.325283 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05bb6877-4f7a-44ef-9473-256081113294-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.882068 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" event={"ID":"05bb6877-4f7a-44ef-9473-256081113294","Type":"ContainerDied","Data":"17e6ddfb7ac89a6116060cc280722ed447e61d7ccf0729b01965c1381276b5bd"} Feb 19 19:31:27 crc 
kubenswrapper[4787]: I0219 19:31:27.882126 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e6ddfb7ac89a6116060cc280722ed447e61d7ccf0729b01965c1381276b5bd" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.882134 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.885664 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" event={"ID":"02139a9b-2832-4a5f-8d79-e553400a8422","Type":"ContainerDied","Data":"13c8b5d83c067cf2dbb2294edfa785147e535e9212a9006b063334bde514090c"} Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.885730 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c8b5d83c067cf2dbb2294edfa785147e535e9212a9006b063334bde514090c" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.885868 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp" Feb 19 19:31:27 crc kubenswrapper[4787]: I0219 19:31:27.966265 4787 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.062959 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xxqk8"] Feb 19 19:31:28 crc kubenswrapper[4787]: E0219 19:31:28.063309 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="util" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063326 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="util" Feb 19 19:31:28 crc kubenswrapper[4787]: E0219 19:31:28.063342 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="pull" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063349 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="pull" Feb 19 19:31:28 crc kubenswrapper[4787]: E0219 19:31:28.063366 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="pull" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063373 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="pull" Feb 19 19:31:28 crc kubenswrapper[4787]: E0219 19:31:28.063383 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="extract" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063391 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="extract" Feb 19 19:31:28 
crc kubenswrapper[4787]: E0219 19:31:28.063402 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="util" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063408 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="util" Feb 19 19:31:28 crc kubenswrapper[4787]: E0219 19:31:28.063422 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="extract" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063429 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="extract" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063575 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="02139a9b-2832-4a5f-8d79-e553400a8422" containerName="extract" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.063595 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bb6877-4f7a-44ef-9473-256081113294" containerName="extract" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.064740 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.073517 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxqk8"] Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.143890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-catalog-content\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.143991 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-utilities\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.144060 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45gc\" (UniqueName: \"kubernetes.io/projected/c9547f7d-d841-4fa1-aa11-3b990b056bf8-kube-api-access-p45gc\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.245294 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-utilities\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.245411 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p45gc\" (UniqueName: \"kubernetes.io/projected/c9547f7d-d841-4fa1-aa11-3b990b056bf8-kube-api-access-p45gc\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.245474 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-catalog-content\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.245899 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-utilities\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.245990 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-catalog-content\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.263638 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p45gc\" (UniqueName: \"kubernetes.io/projected/c9547f7d-d841-4fa1-aa11-3b990b056bf8-kube-api-access-p45gc\") pod \"redhat-operators-xxqk8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.383800 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.799061 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxqk8"] Feb 19 19:31:28 crc kubenswrapper[4787]: W0219 19:31:28.803627 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9547f7d_d841_4fa1_aa11_3b990b056bf8.slice/crio-71d6978f8dfeb3b46bc465ec496983b4af77615381ace7d7b9b8fcd1e84abdbb WatchSource:0}: Error finding container 71d6978f8dfeb3b46bc465ec496983b4af77615381ace7d7b9b8fcd1e84abdbb: Status 404 returned error can't find the container with id 71d6978f8dfeb3b46bc465ec496983b4af77615381ace7d7b9b8fcd1e84abdbb Feb 19 19:31:28 crc kubenswrapper[4787]: I0219 19:31:28.903820 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerStarted","Data":"71d6978f8dfeb3b46bc465ec496983b4af77615381ace7d7b9b8fcd1e84abdbb"} Feb 19 19:31:29 crc kubenswrapper[4787]: I0219 19:31:29.909103 4787 generic.go:334] "Generic (PLEG): container finished" podID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerID="74a61b7e200fe6963b56682ad2634c7a8f5fd9a21b3cbd7619dc6dfe51ccbaa4" exitCode=0 Feb 19 19:31:29 crc kubenswrapper[4787]: I0219 19:31:29.909167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerDied","Data":"74a61b7e200fe6963b56682ad2634c7a8f5fd9a21b3cbd7619dc6dfe51ccbaa4"} Feb 19 19:31:30 crc kubenswrapper[4787]: I0219 19:31:30.921096 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" 
event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerStarted","Data":"e4a0e76a81bab8964ca1ffe098e56992b6f56d182925545887f858199abe6c84"} Feb 19 19:31:31 crc kubenswrapper[4787]: I0219 19:31:31.929415 4787 generic.go:334] "Generic (PLEG): container finished" podID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerID="e4a0e76a81bab8964ca1ffe098e56992b6f56d182925545887f858199abe6c84" exitCode=0 Feb 19 19:31:31 crc kubenswrapper[4787]: I0219 19:31:31.929467 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerDied","Data":"e4a0e76a81bab8964ca1ffe098e56992b6f56d182925545887f858199abe6c84"} Feb 19 19:31:32 crc kubenswrapper[4787]: I0219 19:31:32.938570 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerStarted","Data":"d4c1b55566dfe325f7bcc095defbdfd9ebd92163ed9a53afe76b83a8ea801ccc"} Feb 19 19:31:32 crc kubenswrapper[4787]: I0219 19:31:32.956008 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xxqk8" podStartSLOduration=2.5411867299999997 podStartE2EDuration="4.955992492s" podCreationTimestamp="2026-02-19 19:31:28 +0000 UTC" firstStartedPulling="2026-02-19 19:31:29.911059479 +0000 UTC m=+757.701725421" lastFinishedPulling="2026-02-19 19:31:32.325865241 +0000 UTC m=+760.116531183" observedRunningTime="2026-02-19 19:31:32.955760485 +0000 UTC m=+760.746426437" watchObservedRunningTime="2026-02-19 19:31:32.955992492 +0000 UTC m=+760.746658434" Feb 19 19:31:38 crc kubenswrapper[4787]: I0219 19:31:38.384498 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:38 crc kubenswrapper[4787]: I0219 19:31:38.384845 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.223679 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8"] Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.225708 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.227328 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.227690 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-ph5dr" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.228042 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.228078 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.228199 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.229865 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.247428 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8"] Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.289930 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.289992 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.290041 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.290651 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eddfeaf72585fc8755796a91f30a98dc405a75dee35e13b5751f5a4b560c364c"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.290706 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://eddfeaf72585fc8755796a91f30a98dc405a75dee35e13b5751f5a4b560c364c" gracePeriod=600 Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.307225 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-apiservice-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: 
\"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.307340 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.307371 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-webhook-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.307408 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/aae80a85-0afc-42a9-817a-57570462dee1-manager-config\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.307448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrnm\" (UniqueName: \"kubernetes.io/projected/aae80a85-0afc-42a9-817a-57570462dee1-kube-api-access-djrnm\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.409033 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-apiservice-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.409082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.409101 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-webhook-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.409141 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/aae80a85-0afc-42a9-817a-57570462dee1-manager-config\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.409177 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-djrnm\" (UniqueName: \"kubernetes.io/projected/aae80a85-0afc-42a9-817a-57570462dee1-kube-api-access-djrnm\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.410511 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/aae80a85-0afc-42a9-817a-57570462dee1-manager-config\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.422070 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-apiservice-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.424570 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.426148 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aae80a85-0afc-42a9-817a-57570462dee1-webhook-cert\") pod 
\"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.438333 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xxqk8" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="registry-server" probeResult="failure" output=< Feb 19 19:31:39 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 19:31:39 crc kubenswrapper[4787]: > Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.448740 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djrnm\" (UniqueName: \"kubernetes.io/projected/aae80a85-0afc-42a9-817a-57570462dee1-kube-api-access-djrnm\") pod \"loki-operator-controller-manager-55fc987df5-9spp8\" (UID: \"aae80a85-0afc-42a9-817a-57570462dee1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.544349 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.854147 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8"] Feb 19 19:31:39 crc kubenswrapper[4787]: W0219 19:31:39.860684 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaae80a85_0afc_42a9_817a_57570462dee1.slice/crio-624377cf9e9a5f7a2bb07f634061983184a67323ce46bcdacba4a41d7b9d5d3a WatchSource:0}: Error finding container 624377cf9e9a5f7a2bb07f634061983184a67323ce46bcdacba4a41d7b9d5d3a: Status 404 returned error can't find the container with id 624377cf9e9a5f7a2bb07f634061983184a67323ce46bcdacba4a41d7b9d5d3a Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.986957 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="eddfeaf72585fc8755796a91f30a98dc405a75dee35e13b5751f5a4b560c364c" exitCode=0 Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.987028 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"eddfeaf72585fc8755796a91f30a98dc405a75dee35e13b5751f5a4b560c364c"} Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.987407 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"908353e6e26c8eb14aa15cfd6585d127a5cf2fd790c45d696549088ebf5dab4a"} Feb 19 19:31:39 crc kubenswrapper[4787]: I0219 19:31:39.987432 4787 scope.go:117] "RemoveContainer" containerID="3c63beec0b5874f1d9e9f9dbb1f62ad403c495529a52460b8bf62f93c192ccf6" Feb 19 19:31:39 crc kubenswrapper[4787]: 
I0219 19:31:39.989247 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" event={"ID":"aae80a85-0afc-42a9-817a-57570462dee1","Type":"ContainerStarted","Data":"624377cf9e9a5f7a2bb07f634061983184a67323ce46bcdacba4a41d7b9d5d3a"} Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.586855 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-vddhp"] Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.587689 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.591669 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.592128 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.600495 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-zptk8" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.600844 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-vddhp"] Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.644877 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t9js\" (UniqueName: \"kubernetes.io/projected/c3fc79ee-4854-4886-a549-0baadec47ffd-kube-api-access-2t9js\") pod \"cluster-logging-operator-c769fd969-vddhp\" (UID: \"c3fc79ee-4854-4886-a549-0baadec47ffd\") " pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.746307 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2t9js\" (UniqueName: \"kubernetes.io/projected/c3fc79ee-4854-4886-a549-0baadec47ffd-kube-api-access-2t9js\") pod \"cluster-logging-operator-c769fd969-vddhp\" (UID: \"c3fc79ee-4854-4886-a549-0baadec47ffd\") " pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.784631 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t9js\" (UniqueName: \"kubernetes.io/projected/c3fc79ee-4854-4886-a549-0baadec47ffd-kube-api-access-2t9js\") pod \"cluster-logging-operator-c769fd969-vddhp\" (UID: \"c3fc79ee-4854-4886-a549-0baadec47ffd\") " pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" Feb 19 19:31:40 crc kubenswrapper[4787]: I0219 19:31:40.945301 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" Feb 19 19:31:41 crc kubenswrapper[4787]: I0219 19:31:41.260601 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-vddhp"] Feb 19 19:31:41 crc kubenswrapper[4787]: W0219 19:31:41.270749 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3fc79ee_4854_4886_a549_0baadec47ffd.slice/crio-4ad5f3d7fcd066b05f11ef12e80ddb724d9f58061316673b141e6dfc69ecadc5 WatchSource:0}: Error finding container 4ad5f3d7fcd066b05f11ef12e80ddb724d9f58061316673b141e6dfc69ecadc5: Status 404 returned error can't find the container with id 4ad5f3d7fcd066b05f11ef12e80ddb724d9f58061316673b141e6dfc69ecadc5 Feb 19 19:31:42 crc kubenswrapper[4787]: I0219 19:31:42.031446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" 
event={"ID":"c3fc79ee-4854-4886-a549-0baadec47ffd","Type":"ContainerStarted","Data":"4ad5f3d7fcd066b05f11ef12e80ddb724d9f58061316673b141e6dfc69ecadc5"} Feb 19 19:31:45 crc kubenswrapper[4787]: I0219 19:31:45.105683 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" event={"ID":"aae80a85-0afc-42a9-817a-57570462dee1","Type":"ContainerStarted","Data":"82af365d739726b31975ef733236a15ebe90214686e2605afa3d19f909cb99be"} Feb 19 19:31:48 crc kubenswrapper[4787]: I0219 19:31:48.441032 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:48 crc kubenswrapper[4787]: I0219 19:31:48.500760 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:51 crc kubenswrapper[4787]: I0219 19:31:51.451706 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxqk8"] Feb 19 19:31:51 crc kubenswrapper[4787]: I0219 19:31:51.452527 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xxqk8" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="registry-server" containerID="cri-o://d4c1b55566dfe325f7bcc095defbdfd9ebd92163ed9a53afe76b83a8ea801ccc" gracePeriod=2 Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.160982 4787 generic.go:334] "Generic (PLEG): container finished" podID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerID="d4c1b55566dfe325f7bcc095defbdfd9ebd92163ed9a53afe76b83a8ea801ccc" exitCode=0 Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.161035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerDied","Data":"d4c1b55566dfe325f7bcc095defbdfd9ebd92163ed9a53afe76b83a8ea801ccc"} Feb 19 
19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.757388 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.835665 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-catalog-content\") pod \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.835779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p45gc\" (UniqueName: \"kubernetes.io/projected/c9547f7d-d841-4fa1-aa11-3b990b056bf8-kube-api-access-p45gc\") pod \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.835812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-utilities\") pod \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\" (UID: \"c9547f7d-d841-4fa1-aa11-3b990b056bf8\") " Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.837001 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-utilities" (OuterVolumeSpecName: "utilities") pod "c9547f7d-d841-4fa1-aa11-3b990b056bf8" (UID: "c9547f7d-d841-4fa1-aa11-3b990b056bf8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.844581 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9547f7d-d841-4fa1-aa11-3b990b056bf8-kube-api-access-p45gc" (OuterVolumeSpecName: "kube-api-access-p45gc") pod "c9547f7d-d841-4fa1-aa11-3b990b056bf8" (UID: "c9547f7d-d841-4fa1-aa11-3b990b056bf8"). InnerVolumeSpecName "kube-api-access-p45gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.938036 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p45gc\" (UniqueName: \"kubernetes.io/projected/c9547f7d-d841-4fa1-aa11-3b990b056bf8-kube-api-access-p45gc\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.938494 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:52 crc kubenswrapper[4787]: I0219 19:31:52.991407 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9547f7d-d841-4fa1-aa11-3b990b056bf8" (UID: "c9547f7d-d841-4fa1-aa11-3b990b056bf8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.039483 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9547f7d-d841-4fa1-aa11-3b990b056bf8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.170009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" event={"ID":"aae80a85-0afc-42a9-817a-57570462dee1","Type":"ContainerStarted","Data":"53c5618c67e09caabf1f942a0236d5eecc82e06ba4365306591b5855b892de23"} Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.172054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" event={"ID":"c3fc79ee-4854-4886-a549-0baadec47ffd","Type":"ContainerStarted","Data":"7e58842977155044179bf948f72e4e8111a854f00c29ec14d35ccbf695929746"} Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.175623 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxqk8" event={"ID":"c9547f7d-d841-4fa1-aa11-3b990b056bf8","Type":"ContainerDied","Data":"71d6978f8dfeb3b46bc465ec496983b4af77615381ace7d7b9b8fcd1e84abdbb"} Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.175667 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxqk8" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.175676 4787 scope.go:117] "RemoveContainer" containerID="d4c1b55566dfe325f7bcc095defbdfd9ebd92163ed9a53afe76b83a8ea801ccc" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.193769 4787 scope.go:117] "RemoveContainer" containerID="e4a0e76a81bab8964ca1ffe098e56992b6f56d182925545887f858199abe6c84" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.210227 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" podStartSLOduration=1.287925199 podStartE2EDuration="14.210209176s" podCreationTimestamp="2026-02-19 19:31:39 +0000 UTC" firstStartedPulling="2026-02-19 19:31:39.864662013 +0000 UTC m=+767.655327945" lastFinishedPulling="2026-02-19 19:31:52.78694598 +0000 UTC m=+780.577611922" observedRunningTime="2026-02-19 19:31:53.198556346 +0000 UTC m=+780.989222288" watchObservedRunningTime="2026-02-19 19:31:53.210209176 +0000 UTC m=+781.000875118" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.216687 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxqk8"] Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.223643 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xxqk8"] Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.226205 4787 scope.go:117] "RemoveContainer" containerID="74a61b7e200fe6963b56682ad2634c7a8f5fd9a21b3cbd7619dc6dfe51ccbaa4" Feb 19 19:31:53 crc kubenswrapper[4787]: I0219 19:31:53.245392 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-vddhp" podStartSLOduration=1.76902446 podStartE2EDuration="13.245265028s" podCreationTimestamp="2026-02-19 19:31:40 +0000 UTC" firstStartedPulling="2026-02-19 19:31:41.276575707 +0000 UTC 
m=+769.067241649" lastFinishedPulling="2026-02-19 19:31:52.752816275 +0000 UTC m=+780.543482217" observedRunningTime="2026-02-19 19:31:53.238224623 +0000 UTC m=+781.028890565" watchObservedRunningTime="2026-02-19 19:31:53.245265028 +0000 UTC m=+781.035930970" Feb 19 19:31:54 crc kubenswrapper[4787]: I0219 19:31:54.181706 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:54 crc kubenswrapper[4787]: I0219 19:31:54.183101 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 19:31:54 crc kubenswrapper[4787]: I0219 19:31:54.899672 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" path="/var/lib/kubelet/pods/c9547f7d-d841-4fa1-aa11-3b990b056bf8/volumes" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.189684 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 19 19:32:00 crc kubenswrapper[4787]: E0219 19:32:00.196812 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="extract-utilities" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.196867 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="extract-utilities" Feb 19 19:32:00 crc kubenswrapper[4787]: E0219 19:32:00.196891 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="registry-server" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.196906 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="registry-server" Feb 19 19:32:00 crc kubenswrapper[4787]: E0219 19:32:00.196932 4787 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="extract-content" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.196944 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="extract-content" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.197251 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9547f7d-d841-4fa1-aa11-3b990b056bf8" containerName="registry-server" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.198207 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.200519 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.202000 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.206633 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.343685 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5l9w\" (UniqueName: \"kubernetes.io/projected/6d0d0c02-c7fa-4aaf-ab22-bb4098302b75-kube-api-access-f5l9w\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") " pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.343926 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") " pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.445196 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f5l9w\" (UniqueName: \"kubernetes.io/projected/6d0d0c02-c7fa-4aaf-ab22-bb4098302b75-kube-api-access-f5l9w\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") " pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.445999 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") " pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.451192 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.451249 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f9ff669f53524b1a172315cf6b6049b8acc292cc2f91a2180fc528495bc321a2/globalmount\"" pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.470660 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5l9w\" (UniqueName: \"kubernetes.io/projected/6d0d0c02-c7fa-4aaf-ab22-bb4098302b75-kube-api-access-f5l9w\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") " pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.499426 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ac156194-601c-44bb-9cbc-8ec63c6371c9\") pod \"minio\" (UID: \"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75\") " pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.533218 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 19 19:32:00 crc kubenswrapper[4787]: I0219 19:32:00.768747 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 19 19:32:01 crc kubenswrapper[4787]: I0219 19:32:01.247296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75","Type":"ContainerStarted","Data":"3f0d042246d94d5d99667158f8cbe265282cdb9a7e27e8614ed1782474ded079"} Feb 19 19:32:05 crc kubenswrapper[4787]: I0219 19:32:05.286772 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6d0d0c02-c7fa-4aaf-ab22-bb4098302b75","Type":"ContainerStarted","Data":"a364a29a37e649a7662002c0b89bf36425146de9e57c22ab580092cb7a20fa51"} Feb 19 19:32:05 crc kubenswrapper[4787]: I0219 19:32:05.306851 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.994303814 podStartE2EDuration="8.306823759s" podCreationTimestamp="2026-02-19 19:31:57 +0000 UTC" firstStartedPulling="2026-02-19 19:32:00.774727998 +0000 UTC m=+788.565393960" lastFinishedPulling="2026-02-19 19:32:04.087247923 +0000 UTC m=+791.877913905" observedRunningTime="2026-02-19 19:32:05.298681091 +0000 UTC m=+793.089347053" watchObservedRunningTime="2026-02-19 19:32:05.306823759 +0000 UTC m=+793.097489711" Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.981246 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx"] Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.982525 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.990549 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-8xd47" Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.990584 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.990853 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.990867 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 19 19:32:11 crc kubenswrapper[4787]: I0219 19:32:11.990900 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.001658 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.139259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6f8721-8336-47fa-b27a-6c897006b94e-config\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.139342 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8t5\" (UniqueName: \"kubernetes.io/projected/2c6f8721-8336-47fa-b27a-6c897006b94e-kube-api-access-bm8t5\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.139659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.139871 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.140019 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.169208 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.170226 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.173754 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.173937 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.173972 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.195789 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.224323 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-htw48"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.225362 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.229784 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.230273 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.243295 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-htw48"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.260568 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.260652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6f8721-8336-47fa-b27a-6c897006b94e-config\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.260704 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8t5\" (UniqueName: \"kubernetes.io/projected/2c6f8721-8336-47fa-b27a-6c897006b94e-kube-api-access-bm8t5\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 
19:32:12.260803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.260851 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.262437 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.264640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6f8721-8336-47fa-b27a-6c897006b94e-config\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.271046 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" 
(UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.282320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2c6f8721-8336-47fa-b27a-6c897006b94e-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.289221 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8t5\" (UniqueName: \"kubernetes.io/projected/2c6f8721-8336-47fa-b27a-6c897006b94e-kube-api-access-bm8t5\") pod \"logging-loki-distributor-5d5548c9f5-dkzcx\" (UID: \"2c6f8721-8336-47fa-b27a-6c897006b94e\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.323135 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.361958 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0d4193-66a0-48c4-8932-8827eaac2c2b-config\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364664 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364726 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364746 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364773 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-config\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsg4q\" (UniqueName: \"kubernetes.io/projected/ca0d4193-66a0-48c4-8932-8827eaac2c2b-kube-api-access-wsg4q\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364863 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtnfn\" (UniqueName: \"kubernetes.io/projected/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-kube-api-access-vtnfn\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: 
\"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.362902 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-65d54b8875-tjbh7"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.364944 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.366178 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.370464 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.370639 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.370752 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.370797 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-khzwn" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.370862 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.371190 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.371327 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-65d54b8875-tjbh7"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.379585 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-65d54b8875-96vjl"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.380936 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.384457 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-65d54b8875-96vjl"] Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.466802 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.466875 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-rbac\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.466915 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0d4193-66a0-48c4-8932-8827eaac2c2b-config\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.466937 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-lokistack-gateway\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 
19:32:12.466959 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.466976 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjhn\" (UniqueName: \"kubernetes.io/projected/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-kube-api-access-2bjhn\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467013 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467033 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467051 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-config\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: 
\"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467067 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tls-secret\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467083 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467130 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsg4q\" (UniqueName: \"kubernetes.io/projected/ca0d4193-66a0-48c4-8932-8827eaac2c2b-kube-api-access-wsg4q\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467152 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tenants\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467173 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467190 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtnfn\" (UniqueName: \"kubernetes.io/projected/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-kube-api-access-vtnfn\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467215 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467230 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-gateway-client-http\") pod 
\"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.467250 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.469001 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.469082 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-config\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.469512 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.469697 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0d4193-66a0-48c4-8932-8827eaac2c2b-config\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.473357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.473817 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.476217 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.477386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.478472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/ca0d4193-66a0-48c4-8932-8827eaac2c2b-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.489404 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtnfn\" (UniqueName: \"kubernetes.io/projected/9d8ca7ab-f667-423c-926e-a9e2cfc10c1b-kube-api-access-vtnfn\") pod \"logging-loki-query-frontend-6d6859c548-htw48\" (UID: \"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.490007 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsg4q\" (UniqueName: \"kubernetes.io/projected/ca0d4193-66a0-48c4-8932-8827eaac2c2b-kube-api-access-wsg4q\") pod \"logging-loki-querier-76bf7b6d45-nj9wt\" (UID: \"ca0d4193-66a0-48c4-8932-8827eaac2c2b\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.544273 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.571436 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.571885 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tls-secret\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.571922 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tenants\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.571941 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.571963 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-rbac\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.571990 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572040 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-lokistack-gateway\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572068 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-rbac\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc 
kubenswrapper[4787]: I0219 19:32:12.572093 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnvcg\" (UniqueName: \"kubernetes.io/projected/ffe2a444-f47e-4193-b322-5943bf473b44-kube-api-access-pnvcg\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572121 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-lokistack-gateway\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572158 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bjhn\" (UniqueName: \"kubernetes.io/projected/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-kube-api-access-2bjhn\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572180 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tenants\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572209 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tls-secret\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.572228 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.573215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: E0219 19:32:12.573306 4787 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 19 19:32:12 crc kubenswrapper[4787]: E0219 19:32:12.573350 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tls-secret podName:47705ce6-ef81-47a2-bcd3-a10b7bb9317a nodeName:}" failed. 
No retries permitted until 2026-02-19 19:32:13.073335705 +0000 UTC m=+800.864001647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tls-secret") pod "logging-loki-gateway-65d54b8875-tjbh7" (UID: "47705ce6-ef81-47a2-bcd3-a10b7bb9317a") : secret "logging-loki-gateway-http" not found Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.573850 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-lokistack-gateway\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.574281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-rbac\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.575665 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tenants\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.576041 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " 
pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.583290 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.591230 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bjhn\" (UniqueName: \"kubernetes.io/projected/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-kube-api-access-2bjhn\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673505 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-lokistack-gateway\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnvcg\" (UniqueName: 
\"kubernetes.io/projected/ffe2a444-f47e-4193-b322-5943bf473b44-kube-api-access-pnvcg\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673570 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673597 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tenants\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673686 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tls-secret\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.673712 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-rbac\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.674764 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-rbac\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: E0219 19:32:12.675266 4787 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 19 19:32:12 crc kubenswrapper[4787]: E0219 19:32:12.675351 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tls-secret podName:ffe2a444-f47e-4193-b322-5943bf473b44 nodeName:}" failed. No retries permitted until 2026-02-19 19:32:13.175327228 +0000 UTC m=+800.965993250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tls-secret") pod "logging-loki-gateway-65d54b8875-96vjl" (UID: "ffe2a444-f47e-4193-b322-5943bf473b44") : secret "logging-loki-gateway-http" not found Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.675995 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-lokistack-gateway\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.676281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.677017 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.686032 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 
19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.688509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tenants\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.709061 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnvcg\" (UniqueName: \"kubernetes.io/projected/ffe2a444-f47e-4193-b322-5943bf473b44-kube-api-access-pnvcg\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.786433 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:12 crc kubenswrapper[4787]: I0219 19:32:12.789406 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.011239 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-htw48"] Feb 19 19:32:13 crc kubenswrapper[4787]: W0219 19:32:13.012854 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d8ca7ab_f667_423c_926e_a9e2cfc10c1b.slice/crio-3353eb094708c3e7fe7b402c898e523229cb40c3fa87f0dd36a9a1c928871175 WatchSource:0}: Error finding container 3353eb094708c3e7fe7b402c898e523229cb40c3fa87f0dd36a9a1c928871175: Status 404 returned error can't find the container with id 3353eb094708c3e7fe7b402c898e523229cb40c3fa87f0dd36a9a1c928871175 Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.079694 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tls-secret\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.083680 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/47705ce6-ef81-47a2-bcd3-a10b7bb9317a-tls-secret\") pod \"logging-loki-gateway-65d54b8875-tjbh7\" (UID: \"47705ce6-ef81-47a2-bcd3-a10b7bb9317a\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.144323 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.145246 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.147371 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.147459 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.156937 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.181139 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tls-secret\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.185514 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ffe2a444-f47e-4193-b322-5943bf473b44-tls-secret\") pod \"logging-loki-gateway-65d54b8875-96vjl\" (UID: \"ffe2a444-f47e-4193-b322-5943bf473b44\") " pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.199810 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.200860 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.202342 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.206348 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.211319 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.220359 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282000 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282128 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282190 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: 
\"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282266 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7l72\" (UniqueName: \"kubernetes.io/projected/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-kube-api-access-d7l72\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282352 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282381 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282450 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-config\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.282572 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.285222 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.286035 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.288539 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.288713 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.301346 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.317795 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.335502 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.355535 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" event={"ID":"ca0d4193-66a0-48c4-8932-8827eaac2c2b","Type":"ContainerStarted","Data":"e3e0639d1fb7b6127960a1192c172b3aa2097d5dbe047cc4763c5f65460d3cd1"} Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.356486 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" event={"ID":"2c6f8721-8336-47fa-b27a-6c897006b94e","Type":"ContainerStarted","Data":"1094f5a356af6e3d57f0c267ad4a2b254ae2191a4d1bb7aee60bc4bfb4791484"} Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.357622 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" event={"ID":"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b","Type":"ContainerStarted","Data":"3353eb094708c3e7fe7b402c898e523229cb40c3fa87f0dd36a9a1c928871175"} Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383655 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 
19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383682 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383775 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383832 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383888 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383922 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ingester-http\") pod 
\"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.383975 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384033 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67m6w\" (UniqueName: \"kubernetes.io/projected/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-kube-api-access-67m6w\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384141 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384173 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384194 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384253 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7l72\" (UniqueName: \"kubernetes.io/projected/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-kube-api-access-d7l72\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384276 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvnk\" (UniqueName: \"kubernetes.io/projected/46ee23a2-1b37-42e7-899f-5c1c70a6755b-kube-api-access-ftvnk\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384302 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384340 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-config\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ee23a2-1b37-42e7-899f-5c1c70a6755b-config\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384453 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-config\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.384492 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.385872 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-config\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.385985 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.390635 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.390647 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.390681 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e139bef917bcd45156e8bc014c8ea48b85a5544324456406386aa8e9c47a5b9b/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.390681 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/92db2652ed1af2006a4b749a1b4045f695176d025ba4038835f8d56a4b4e5802/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.392812 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.394484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc 
kubenswrapper[4787]: I0219 19:32:13.395391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.401203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7l72\" (UniqueName: \"kubernetes.io/projected/74b9f2e5-3b9f-4af9-990f-147a1c6f8943-kube-api-access-d7l72\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.430031 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b7e9a5-a24a-4119-8def-ef366385dbe8\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.466347 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ff85970d-3a72-4e5d-b15c-f17cf5306fab\") pod \"logging-loki-ingester-0\" (UID: \"74b9f2e5-3b9f-4af9-990f-147a1c6f8943\") " pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.476564 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485637 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485675 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485691 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485714 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485741 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485762 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485782 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485809 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67m6w\" (UniqueName: \"kubernetes.io/projected/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-kube-api-access-67m6w\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485863 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485891 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftvnk\" (UniqueName: \"kubernetes.io/projected/46ee23a2-1b37-42e7-899f-5c1c70a6755b-kube-api-access-ftvnk\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ee23a2-1b37-42e7-899f-5c1c70a6755b-config\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485937 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.485958 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-config\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.486829 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-config\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.487911 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ee23a2-1b37-42e7-899f-5c1c70a6755b-config\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.487966 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.488662 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.490816 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.492023 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.492135 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.492276 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.495049 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/46ee23a2-1b37-42e7-899f-5c1c70a6755b-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.495231 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.495281 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/91f04e65b7b18edba6a98264953b5baf799eb76a4aac4405283ed64d08d6625c/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.495419 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.495474 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7627b51db7ee528a6f084e158ea1c707ea35f5e882603e448bfe1fabd1a2033f/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.495579 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.505477 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftvnk\" 
(UniqueName: \"kubernetes.io/projected/46ee23a2-1b37-42e7-899f-5c1c70a6755b-kube-api-access-ftvnk\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.505644 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67m6w\" (UniqueName: \"kubernetes.io/projected/47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca-kube-api-access-67m6w\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.521484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1442b0fb-9c5c-41df-80ce-1feef8a82c91\") pod \"logging-loki-compactor-0\" (UID: \"46ee23a2-1b37-42e7-899f-5c1c70a6755b\") " pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.523058 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-653156c7-8476-4f66-9bf8-7aa5ce7b521f\") pod \"logging-loki-index-gateway-0\" (UID: \"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.621512 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.809864 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-65d54b8875-tjbh7"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.814905 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:13 crc kubenswrapper[4787]: W0219 19:32:13.815163 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe2a444_f47e_4193_b322_5943bf473b44.slice/crio-c4b48ea489008eff7df49ccc6aee804249a1c2f8296039c8e0fc4b39db4b51a3 WatchSource:0}: Error finding container c4b48ea489008eff7df49ccc6aee804249a1c2f8296039c8e0fc4b39db4b51a3: Status 404 returned error can't find the container with id c4b48ea489008eff7df49ccc6aee804249a1c2f8296039c8e0fc4b39db4b51a3 Feb 19 19:32:13 crc kubenswrapper[4787]: W0219 19:32:13.817121 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47705ce6_ef81_47a2_bcd3_a10b7bb9317a.slice/crio-3c398d91d3a55c06c66cc5a01d08c8ce6b9a56d646e3677cb4a14d787dd0291b WatchSource:0}: Error finding container 3c398d91d3a55c06c66cc5a01d08c8ce6b9a56d646e3677cb4a14d787dd0291b: Status 404 returned error can't find the container with id 3c398d91d3a55c06c66cc5a01d08c8ce6b9a56d646e3677cb4a14d787dd0291b Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.818288 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-65d54b8875-96vjl"] Feb 19 19:32:13 crc kubenswrapper[4787]: I0219 19:32:13.913547 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.023861 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 19 19:32:14 crc kubenswrapper[4787]: W0219 19:32:14.035454 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47f3a5fe_a7c4_47d7_a8b3_a367f2eaccca.slice/crio-e77a30f7d89df9c3c205dfaf89440c83b87eddd51bcae957ba920ff514e9e2a5 
WatchSource:0}: Error finding container e77a30f7d89df9c3c205dfaf89440c83b87eddd51bcae957ba920ff514e9e2a5: Status 404 returned error can't find the container with id e77a30f7d89df9c3c205dfaf89440c83b87eddd51bcae957ba920ff514e9e2a5 Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.252508 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 19 19:32:14 crc kubenswrapper[4787]: W0219 19:32:14.308256 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46ee23a2_1b37_42e7_899f_5c1c70a6755b.slice/crio-66a2d904f87cad9607c73b3e6a51562b4070ea201e4a324737239a37b45d211c WatchSource:0}: Error finding container 66a2d904f87cad9607c73b3e6a51562b4070ea201e4a324737239a37b45d211c: Status 404 returned error can't find the container with id 66a2d904f87cad9607c73b3e6a51562b4070ea201e4a324737239a37b45d211c Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.366028 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca","Type":"ContainerStarted","Data":"e77a30f7d89df9c3c205dfaf89440c83b87eddd51bcae957ba920ff514e9e2a5"} Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.367667 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"46ee23a2-1b37-42e7-899f-5c1c70a6755b","Type":"ContainerStarted","Data":"66a2d904f87cad9607c73b3e6a51562b4070ea201e4a324737239a37b45d211c"} Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.369108 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" event={"ID":"ffe2a444-f47e-4193-b322-5943bf473b44","Type":"ContainerStarted","Data":"c4b48ea489008eff7df49ccc6aee804249a1c2f8296039c8e0fc4b39db4b51a3"} Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.370295 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"74b9f2e5-3b9f-4af9-990f-147a1c6f8943","Type":"ContainerStarted","Data":"932054bafe71cc1b7d2ed3cb7db60d25f50c4be67f0bf16d535a80f5b2d0b17d"} Feb 19 19:32:14 crc kubenswrapper[4787]: I0219 19:32:14.371537 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" event={"ID":"47705ce6-ef81-47a2-bcd3-a10b7bb9317a","Type":"ContainerStarted","Data":"3c398d91d3a55c06c66cc5a01d08c8ce6b9a56d646e3677cb4a14d787dd0291b"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.398262 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" event={"ID":"ca0d4193-66a0-48c4-8932-8827eaac2c2b","Type":"ContainerStarted","Data":"77892060ae4d06a7339cc59062badf64c0245d22274e75eedabf51b1cc242a06"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.399120 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.401879 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" event={"ID":"2c6f8721-8336-47fa-b27a-6c897006b94e","Type":"ContainerStarted","Data":"512cb2beab1a51254c66eaf08ed4defddd7018634269081618aac5215957f6e6"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.402019 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.404802 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"74b9f2e5-3b9f-4af9-990f-147a1c6f8943","Type":"ContainerStarted","Data":"13d6d91aef38bfdb4ad3d68437f64c533b985a4f14b457ad0950200f6ceb73c4"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 
19:32:17.404901 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.407366 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" event={"ID":"47705ce6-ef81-47a2-bcd3-a10b7bb9317a","Type":"ContainerStarted","Data":"54a4aabc1ecd39d8fe8f91c6ca43311380a066a0c0ac1adbf7d8c236f6c5ce2c"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.410212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" event={"ID":"9d8ca7ab-f667-423c-926e-a9e2cfc10c1b","Type":"ContainerStarted","Data":"786636c3530186e94877e769461e5ac98209b42e40f1ca78e605bb12d327c7dc"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.410314 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.412416 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca","Type":"ContainerStarted","Data":"7d6bb00ca9d2b31f3592817156848b6d8ba43d8530e8a37157215ee8138cd9cc"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.412496 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.419357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"46ee23a2-1b37-42e7-899f-5c1c70a6755b","Type":"ContainerStarted","Data":"9be40f1620b99781791d234f2a25ad29755f5e508a2a4f61dbbfd15192c3d52f"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.420375 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.426510 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" podStartSLOduration=2.439144294 podStartE2EDuration="5.42641362s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" firstStartedPulling="2026-02-19 19:32:13.236261986 +0000 UTC m=+801.026927928" lastFinishedPulling="2026-02-19 19:32:16.223531302 +0000 UTC m=+804.014197254" observedRunningTime="2026-02-19 19:32:17.419105337 +0000 UTC m=+805.209771279" watchObservedRunningTime="2026-02-19 19:32:17.42641362 +0000 UTC m=+805.217079592" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.427444 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" event={"ID":"ffe2a444-f47e-4193-b322-5943bf473b44","Type":"ContainerStarted","Data":"ded77ccd491a60d6dc17e7432761e9f3904cb9485ac20c9b6d38096391ceb805"} Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.465098 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" podStartSLOduration=2.229618976 podStartE2EDuration="5.465070687s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" firstStartedPulling="2026-02-19 19:32:13.014849492 +0000 UTC m=+800.805515434" lastFinishedPulling="2026-02-19 19:32:16.250301203 +0000 UTC m=+804.040967145" observedRunningTime="2026-02-19 19:32:17.44013461 +0000 UTC m=+805.230800592" watchObservedRunningTime="2026-02-19 19:32:17.465070687 +0000 UTC m=+805.255736629" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.469644 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.241815909 podStartE2EDuration="5.46963233s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" 
firstStartedPulling="2026-02-19 19:32:14.038506248 +0000 UTC m=+801.829172190" lastFinishedPulling="2026-02-19 19:32:16.266322669 +0000 UTC m=+804.056988611" observedRunningTime="2026-02-19 19:32:17.458344771 +0000 UTC m=+805.249010733" watchObservedRunningTime="2026-02-19 19:32:17.46963233 +0000 UTC m=+805.260298272" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.495839 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.198113214 podStartE2EDuration="5.495813813s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" firstStartedPulling="2026-02-19 19:32:13.943686754 +0000 UTC m=+801.734352696" lastFinishedPulling="2026-02-19 19:32:16.241387303 +0000 UTC m=+804.032053295" observedRunningTime="2026-02-19 19:32:17.490792006 +0000 UTC m=+805.281457958" watchObservedRunningTime="2026-02-19 19:32:17.495813813 +0000 UTC m=+805.286479775" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.512698 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" podStartSLOduration=3.078152163 podStartE2EDuration="6.512673964s" podCreationTimestamp="2026-02-19 19:32:11 +0000 UTC" firstStartedPulling="2026-02-19 19:32:12.806962024 +0000 UTC m=+800.597627966" lastFinishedPulling="2026-02-19 19:32:16.241483825 +0000 UTC m=+804.032149767" observedRunningTime="2026-02-19 19:32:17.509301696 +0000 UTC m=+805.299967638" watchObservedRunningTime="2026-02-19 19:32:17.512673964 +0000 UTC m=+805.303339916" Feb 19 19:32:17 crc kubenswrapper[4787]: I0219 19:32:17.534764 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.5136294 podStartE2EDuration="5.534740817s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" firstStartedPulling="2026-02-19 19:32:14.311239927 +0000 UTC m=+802.101905859" 
lastFinishedPulling="2026-02-19 19:32:16.332351334 +0000 UTC m=+804.123017276" observedRunningTime="2026-02-19 19:32:17.528253178 +0000 UTC m=+805.318919120" watchObservedRunningTime="2026-02-19 19:32:17.534740817 +0000 UTC m=+805.325406769" Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.435981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" event={"ID":"ffe2a444-f47e-4193-b322-5943bf473b44","Type":"ContainerStarted","Data":"4a05fca99ca9ff5cc55551979d1a422539e8bf7a33ae4906ca059940e6d445c0"} Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.436420 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.438205 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": dial tcp 10.217.0.55:8083: connect: connection refused" start-of-body= Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.438260 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": dial tcp 10.217.0.55:8083: connect: connection refused" Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.438712 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" event={"ID":"47705ce6-ef81-47a2-bcd3-a10b7bb9317a","Type":"ContainerStarted","Data":"031fedf0bd8cd5b3fb8f9048ecb6564dc95284aac5f0bcb3a298d3ef2d39652b"} Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.461480 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podStartSLOduration=2.039378633 podStartE2EDuration="6.461464197s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" firstStartedPulling="2026-02-19 19:32:13.816765995 +0000 UTC m=+801.607431937" lastFinishedPulling="2026-02-19 19:32:18.238851559 +0000 UTC m=+806.029517501" observedRunningTime="2026-02-19 19:32:18.455668349 +0000 UTC m=+806.246334291" watchObservedRunningTime="2026-02-19 19:32:18.461464197 +0000 UTC m=+806.252130139" Feb 19 19:32:18 crc kubenswrapper[4787]: I0219 19:32:18.479052 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podStartSLOduration=2.075126355 podStartE2EDuration="6.479022509s" podCreationTimestamp="2026-02-19 19:32:12 +0000 UTC" firstStartedPulling="2026-02-19 19:32:13.830268419 +0000 UTC m=+801.620934361" lastFinishedPulling="2026-02-19 19:32:18.234164573 +0000 UTC m=+806.024830515" observedRunningTime="2026-02-19 19:32:18.478533165 +0000 UTC m=+806.269199127" watchObservedRunningTime="2026-02-19 19:32:18.479022509 +0000 UTC m=+806.269688491" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.445336 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.445754 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.445775 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.453551 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.457647 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.459120 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" Feb 19 19:32:19 crc kubenswrapper[4787]: I0219 19:32:19.465909 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" Feb 19 19:32:32 crc kubenswrapper[4787]: I0219 19:32:32.332225 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" Feb 19 19:32:32 crc kubenswrapper[4787]: I0219 19:32:32.554075 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" Feb 19 19:32:32 crc kubenswrapper[4787]: I0219 19:32:32.794176 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" Feb 19 19:32:33 crc kubenswrapper[4787]: I0219 19:32:33.485168 4787 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 19 19:32:33 crc kubenswrapper[4787]: I0219 19:32:33.485847 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="74b9f2e5-3b9f-4af9-990f-147a1c6f8943" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 19:32:33 crc kubenswrapper[4787]: I0219 19:32:33.632078 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 19 19:32:33 crc 
kubenswrapper[4787]: I0219 19:32:33.830881 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 19 19:32:43 crc kubenswrapper[4787]: I0219 19:32:43.481988 4787 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 19 19:32:43 crc kubenswrapper[4787]: I0219 19:32:43.482593 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="74b9f2e5-3b9f-4af9-990f-147a1c6f8943" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 19:32:53 crc kubenswrapper[4787]: I0219 19:32:53.481318 4787 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 19 19:32:53 crc kubenswrapper[4787]: I0219 19:32:53.481882 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="74b9f2e5-3b9f-4af9-990f-147a1c6f8943" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 19:33:03 crc kubenswrapper[4787]: I0219 19:33:03.483599 4787 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 19 19:33:03 crc kubenswrapper[4787]: I0219 19:33:03.484178 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="74b9f2e5-3b9f-4af9-990f-147a1c6f8943" 
containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 19:33:13 crc kubenswrapper[4787]: I0219 19:33:13.482286 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.737947 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-74tdr"] Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.740069 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.742069 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.743667 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.744003 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.744133 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.745759 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-kntzs" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.752942 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.766145 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-74tdr"] Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.792454 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-74tdr"] Feb 19 19:33:29 
crc kubenswrapper[4787]: E0219 19:33:29.793064 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-qx65z metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-74tdr" podUID="aab5cbc4-365a-48ed-a67e-f3926604e1de" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.865954 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aab5cbc4-365a-48ed-a67e-f3926604e1de-tmp\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866022 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-token\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866148 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx65z\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-kube-api-access-qx65z\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc 
kubenswrapper[4787]: I0219 19:33:29.866171 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-syslog-receiver\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866196 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config-openshift-service-cacrt\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866334 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-sa-token\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aab5cbc4-365a-48ed-a67e-f3926604e1de-datadir\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866427 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-entrypoint\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 
19:33:29.866454 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.866470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-trusted-ca\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967731 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-token\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967772 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967792 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx65z\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-kube-api-access-qx65z\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967810 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" 
(UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-syslog-receiver\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967832 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config-openshift-service-cacrt\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967877 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-sa-token\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aab5cbc4-365a-48ed-a67e-f3926604e1de-datadir\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967929 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-entrypoint\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967954 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics\") pod \"collector-74tdr\" (UID: 
\"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967967 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-trusted-ca\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.967990 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aab5cbc4-365a-48ed-a67e-f3926604e1de-tmp\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: E0219 19:33:29.968599 4787 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.968653 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: E0219 19:33:29.968699 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics podName:aab5cbc4-365a-48ed-a67e-f3926604e1de nodeName:}" failed. No retries permitted until 2026-02-19 19:33:30.468678812 +0000 UTC m=+878.259344794 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics") pod "collector-74tdr" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de") : secret "collector-metrics" not found Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.968893 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aab5cbc4-365a-48ed-a67e-f3926604e1de-datadir\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.969344 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-entrypoint\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.969419 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config-openshift-service-cacrt\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.969721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-trusted-ca\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.978688 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-token\") pod \"collector-74tdr\" (UID: 
\"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.979343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aab5cbc4-365a-48ed-a67e-f3926604e1de-tmp\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.982503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-syslog-receiver\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.988281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-sa-token\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:29 crc kubenswrapper[4787]: I0219 19:33:29.999185 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx65z\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-kube-api-access-qx65z\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.015023 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-74tdr" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.045165 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-74tdr" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.171650 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-entrypoint\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172016 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172038 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-trusted-ca\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172107 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172245 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aab5cbc4-365a-48ed-a67e-f3926604e1de-datadir" (OuterVolumeSpecName: "datadir") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172506 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config" (OuterVolumeSpecName: "config") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172529 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172560 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aab5cbc4-365a-48ed-a67e-f3926604e1de-datadir\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.172672 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aab5cbc4-365a-48ed-a67e-f3926604e1de-tmp\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.173049 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). 
InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.173301 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config-openshift-service-cacrt\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.173425 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-token\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.173450 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-syslog-receiver\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.173503 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx65z\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-kube-api-access-qx65z\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.173538 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-sa-token\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.174062 4787 
reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.174083 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.174094 4787 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aab5cbc4-365a-48ed-a67e-f3926604e1de-datadir\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.174105 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.174117 4787 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aab5cbc4-365a-48ed-a67e-f3926604e1de-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.175744 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab5cbc4-365a-48ed-a67e-f3926604e1de-tmp" (OuterVolumeSpecName: "tmp") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.176867 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-sa-token" (OuterVolumeSpecName: "sa-token") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.177409 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-kube-api-access-qx65z" (OuterVolumeSpecName: "kube-api-access-qx65z") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "kube-api-access-qx65z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.177442 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-token" (OuterVolumeSpecName: "collector-token") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.182480 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.276219 4787 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aab5cbc4-365a-48ed-a67e-f3926604e1de-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.276266 4787 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.276281 4787 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.276292 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx65z\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-kube-api-access-qx65z\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.276302 4787 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aab5cbc4-365a-48ed-a67e-f3926604e1de-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.481123 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics\") pod \"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.503421 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics\") pod 
\"collector-74tdr\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " pod="openshift-logging/collector-74tdr" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.683988 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics\") pod \"aab5cbc4-365a-48ed-a67e-f3926604e1de\" (UID: \"aab5cbc4-365a-48ed-a67e-f3926604e1de\") " Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.686706 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics" (OuterVolumeSpecName: "metrics") pod "aab5cbc4-365a-48ed-a67e-f3926604e1de" (UID: "aab5cbc4-365a-48ed-a67e-f3926604e1de"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:33:30 crc kubenswrapper[4787]: I0219 19:33:30.786407 4787 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aab5cbc4-365a-48ed-a67e-f3926604e1de-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.020879 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-74tdr" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.058251 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-74tdr"] Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.072110 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-74tdr"] Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.079049 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-h749t"] Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.080211 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.082711 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.082799 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-kntzs" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.083370 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.083688 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.083710 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.088785 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.091912 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-h749t"] Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.191982 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-tmp\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192021 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-metrics\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " 
pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192057 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xv4\" (UniqueName: \"kubernetes.io/projected/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-kube-api-access-j4xv4\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192077 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-trusted-ca\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192096 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-config\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192119 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-config-openshift-service-cacrt\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192141 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-collector-token\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " 
pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192157 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-collector-syslog-receiver\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192173 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-entrypoint\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192190 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-datadir\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.192207 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-sa-token\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293297 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-config-openshift-service-cacrt\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 
19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293346 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-collector-token\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293370 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-collector-syslog-receiver\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293389 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-entrypoint\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293408 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-datadir\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293423 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-sa-token\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293645 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: 
\"kubernetes.io/host-path/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-datadir\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.293984 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-tmp\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-metrics\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294037 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4xv4\" (UniqueName: \"kubernetes.io/projected/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-kube-api-access-j4xv4\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294059 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-trusted-ca\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294067 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-config-openshift-service-cacrt\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " 
pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294074 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-config\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-entrypoint\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.294779 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-config\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.295337 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-trusted-ca\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.305150 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-tmp\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.305217 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-collector-syslog-receiver\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.305452 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-metrics\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.305785 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-collector-token\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.309534 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-sa-token\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.322450 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4xv4\" (UniqueName: \"kubernetes.io/projected/5b05acd3-34d5-4c6c-a559-2ec0d39761c9-kube-api-access-j4xv4\") pod \"collector-h749t\" (UID: \"5b05acd3-34d5-4c6c-a559-2ec0d39761c9\") " pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.399094 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-h749t" Feb 19 19:33:31 crc kubenswrapper[4787]: I0219 19:33:31.820965 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-h749t"] Feb 19 19:33:32 crc kubenswrapper[4787]: I0219 19:33:32.026739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-h749t" event={"ID":"5b05acd3-34d5-4c6c-a559-2ec0d39761c9","Type":"ContainerStarted","Data":"d37f7ca88a7ba46cabb2519d5871b87d130f78385fba1b24b912ca8b6f17eb89"} Feb 19 19:33:32 crc kubenswrapper[4787]: I0219 19:33:32.900671 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab5cbc4-365a-48ed-a67e-f3926604e1de" path="/var/lib/kubelet/pods/aab5cbc4-365a-48ed-a67e-f3926604e1de/volumes" Feb 19 19:33:39 crc kubenswrapper[4787]: I0219 19:33:39.080286 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-h749t" event={"ID":"5b05acd3-34d5-4c6c-a559-2ec0d39761c9","Type":"ContainerStarted","Data":"56d0c51ed4552a69d62536d91c7572342d1e1375d161a5499dbd861c9e105930"} Feb 19 19:33:39 crc kubenswrapper[4787]: I0219 19:33:39.098259 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-h749t" podStartSLOduration=1.2278847449999999 podStartE2EDuration="8.098238099s" podCreationTimestamp="2026-02-19 19:33:31 +0000 UTC" firstStartedPulling="2026-02-19 19:33:31.828745493 +0000 UTC m=+879.619411455" lastFinishedPulling="2026-02-19 19:33:38.699098867 +0000 UTC m=+886.489764809" observedRunningTime="2026-02-19 19:33:39.097046905 +0000 UTC m=+886.887712867" watchObservedRunningTime="2026-02-19 19:33:39.098238099 +0000 UTC m=+886.888904041" Feb 19 19:33:39 crc kubenswrapper[4787]: I0219 19:33:39.263054 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:33:39 crc kubenswrapper[4787]: I0219 19:33:39.263127 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:34:09 crc kubenswrapper[4787]: I0219 19:34:09.263256 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:34:09 crc kubenswrapper[4787]: I0219 19:34:09.265827 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.260848 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md"] Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.263077 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.267002 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.269324 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md"] Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.385666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.386652 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb4n\" (UniqueName: \"kubernetes.io/projected/10a11b8e-3ef6-4880-8c24-b4d760a6241a-kube-api-access-8cb4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.386779 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: 
I0219 19:34:12.487370 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.487472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.487535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb4n\" (UniqueName: \"kubernetes.io/projected/10a11b8e-3ef6-4880-8c24-b4d760a6241a-kube-api-access-8cb4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.487981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.488038 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.512442 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb4n\" (UniqueName: \"kubernetes.io/projected/10a11b8e-3ef6-4880-8c24-b4d760a6241a-kube-api-access-8cb4n\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:12 crc kubenswrapper[4787]: I0219 19:34:12.586229 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:13 crc kubenswrapper[4787]: I0219 19:34:13.047639 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md"] Feb 19 19:34:13 crc kubenswrapper[4787]: I0219 19:34:13.323802 4787 generic.go:334] "Generic (PLEG): container finished" podID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerID="e7933fe50e2b70cd0d7e89c16b111843196837e030c4506f3eb89fec3e04f501" exitCode=0 Feb 19 19:34:13 crc kubenswrapper[4787]: I0219 19:34:13.323877 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" event={"ID":"10a11b8e-3ef6-4880-8c24-b4d760a6241a","Type":"ContainerDied","Data":"e7933fe50e2b70cd0d7e89c16b111843196837e030c4506f3eb89fec3e04f501"} Feb 19 19:34:13 crc kubenswrapper[4787]: I0219 19:34:13.323960 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" event={"ID":"10a11b8e-3ef6-4880-8c24-b4d760a6241a","Type":"ContainerStarted","Data":"48d55018d7ea6c1953d247d00b3ffdc5c901f05163a2dcee81e9a7ca4b529d35"} Feb 19 19:34:15 crc kubenswrapper[4787]: I0219 19:34:15.339134 4787 generic.go:334] "Generic (PLEG): container finished" podID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerID="b867d3e251f7c17bb38ae289ec6d7ea044628d4024a0c223857452b98a02db68" exitCode=0 Feb 19 19:34:15 crc kubenswrapper[4787]: I0219 19:34:15.339242 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" event={"ID":"10a11b8e-3ef6-4880-8c24-b4d760a6241a","Type":"ContainerDied","Data":"b867d3e251f7c17bb38ae289ec6d7ea044628d4024a0c223857452b98a02db68"} Feb 19 19:34:16 crc kubenswrapper[4787]: I0219 19:34:16.348178 4787 generic.go:334] "Generic (PLEG): container finished" podID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerID="50c361b33bf8858084a31ed69c44f5ce7e9bfbe51cf6883f580f11a092aae652" exitCode=0 Feb 19 19:34:16 crc kubenswrapper[4787]: I0219 19:34:16.348250 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" event={"ID":"10a11b8e-3ef6-4880-8c24-b4d760a6241a","Type":"ContainerDied","Data":"50c361b33bf8858084a31ed69c44f5ce7e9bfbe51cf6883f580f11a092aae652"} Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.627010 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fhcsv"] Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.636721 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.646114 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhcsv"] Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.695061 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.763921 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbjnx\" (UniqueName: \"kubernetes.io/projected/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-kube-api-access-sbjnx\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.764058 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-utilities\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.764082 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-catalog-content\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.864984 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-util\") pod 
\"10a11b8e-3ef6-4880-8c24-b4d760a6241a\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.865072 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cb4n\" (UniqueName: \"kubernetes.io/projected/10a11b8e-3ef6-4880-8c24-b4d760a6241a-kube-api-access-8cb4n\") pod \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.865183 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-bundle\") pod \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\" (UID: \"10a11b8e-3ef6-4880-8c24-b4d760a6241a\") " Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.865391 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-utilities\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.865420 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-catalog-content\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.865468 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbjnx\" (UniqueName: \"kubernetes.io/projected/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-kube-api-access-sbjnx\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 
19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.866436 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-utilities\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.866462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-catalog-content\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.866505 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-bundle" (OuterVolumeSpecName: "bundle") pod "10a11b8e-3ef6-4880-8c24-b4d760a6241a" (UID: "10a11b8e-3ef6-4880-8c24-b4d760a6241a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.882778 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-util" (OuterVolumeSpecName: "util") pod "10a11b8e-3ef6-4880-8c24-b4d760a6241a" (UID: "10a11b8e-3ef6-4880-8c24-b4d760a6241a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.882826 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a11b8e-3ef6-4880-8c24-b4d760a6241a-kube-api-access-8cb4n" (OuterVolumeSpecName: "kube-api-access-8cb4n") pod "10a11b8e-3ef6-4880-8c24-b4d760a6241a" (UID: "10a11b8e-3ef6-4880-8c24-b4d760a6241a"). InnerVolumeSpecName "kube-api-access-8cb4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.889820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbjnx\" (UniqueName: \"kubernetes.io/projected/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-kube-api-access-sbjnx\") pod \"community-operators-fhcsv\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.966727 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.966979 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10a11b8e-3ef6-4880-8c24-b4d760a6241a-util\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:17 crc kubenswrapper[4787]: I0219 19:34:17.966988 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cb4n\" (UniqueName: \"kubernetes.io/projected/10a11b8e-3ef6-4880-8c24-b4d760a6241a-kube-api-access-8cb4n\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:18 crc kubenswrapper[4787]: I0219 19:34:18.003674 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:18 crc kubenswrapper[4787]: I0219 19:34:18.363893 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" event={"ID":"10a11b8e-3ef6-4880-8c24-b4d760a6241a","Type":"ContainerDied","Data":"48d55018d7ea6c1953d247d00b3ffdc5c901f05163a2dcee81e9a7ca4b529d35"} Feb 19 19:34:18 crc kubenswrapper[4787]: I0219 19:34:18.363931 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48d55018d7ea6c1953d247d00b3ffdc5c901f05163a2dcee81e9a7ca4b529d35" Feb 19 19:34:18 crc kubenswrapper[4787]: I0219 19:34:18.363929 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md" Feb 19 19:34:18 crc kubenswrapper[4787]: I0219 19:34:18.447334 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhcsv"] Feb 19 19:34:19 crc kubenswrapper[4787]: I0219 19:34:19.375446 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerID="5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8" exitCode=0 Feb 19 19:34:19 crc kubenswrapper[4787]: I0219 19:34:19.375714 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhcsv" event={"ID":"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb","Type":"ContainerDied","Data":"5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8"} Feb 19 19:34:19 crc kubenswrapper[4787]: I0219 19:34:19.375737 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhcsv" event={"ID":"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb","Type":"ContainerStarted","Data":"2bb475517828158b52230819342a76313c8682126c085e26dfda30c8e2ef487e"} Feb 19 19:34:21 crc 
kubenswrapper[4787]: I0219 19:34:21.391885 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerID="ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889" exitCode=0 Feb 19 19:34:21 crc kubenswrapper[4787]: I0219 19:34:21.391972 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhcsv" event={"ID":"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb","Type":"ContainerDied","Data":"ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889"} Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.401704 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhcsv" event={"ID":"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb","Type":"ContainerStarted","Data":"ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79"} Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.422812 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fhcsv" podStartSLOduration=3.045660532 podStartE2EDuration="5.422791091s" podCreationTimestamp="2026-02-19 19:34:17 +0000 UTC" firstStartedPulling="2026-02-19 19:34:19.377174193 +0000 UTC m=+927.167840135" lastFinishedPulling="2026-02-19 19:34:21.754304752 +0000 UTC m=+929.544970694" observedRunningTime="2026-02-19 19:34:22.418374743 +0000 UTC m=+930.209040695" watchObservedRunningTime="2026-02-19 19:34:22.422791091 +0000 UTC m=+930.213457033" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.692655 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bkxn5"] Feb 19 19:34:22 crc kubenswrapper[4787]: E0219 19:34:22.693189 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="pull" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.693206 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="pull" Feb 19 19:34:22 crc kubenswrapper[4787]: E0219 19:34:22.693220 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="util" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.693227 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="util" Feb 19 19:34:22 crc kubenswrapper[4787]: E0219 19:34:22.693242 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="extract" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.693248 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="extract" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.693376 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a11b8e-3ef6-4880-8c24-b4d760a6241a" containerName="extract" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.694782 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.697367 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.697558 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.698954 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-588bt" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.715869 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bkxn5"] Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.840760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2j5r\" (UniqueName: \"kubernetes.io/projected/5d632d9f-7b63-4c37-b21a-a8053bb0922e-kube-api-access-v2j5r\") pod \"nmstate-operator-694c9596b7-bkxn5\" (UID: \"5d632d9f-7b63-4c37-b21a-a8053bb0922e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.942352 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2j5r\" (UniqueName: \"kubernetes.io/projected/5d632d9f-7b63-4c37-b21a-a8053bb0922e-kube-api-access-v2j5r\") pod \"nmstate-operator-694c9596b7-bkxn5\" (UID: \"5d632d9f-7b63-4c37-b21a-a8053bb0922e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" Feb 19 19:34:22 crc kubenswrapper[4787]: I0219 19:34:22.967783 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2j5r\" (UniqueName: \"kubernetes.io/projected/5d632d9f-7b63-4c37-b21a-a8053bb0922e-kube-api-access-v2j5r\") pod \"nmstate-operator-694c9596b7-bkxn5\" (UID: 
\"5d632d9f-7b63-4c37-b21a-a8053bb0922e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" Feb 19 19:34:23 crc kubenswrapper[4787]: I0219 19:34:23.012548 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" Feb 19 19:34:23 crc kubenswrapper[4787]: I0219 19:34:23.560245 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bkxn5"] Feb 19 19:34:23 crc kubenswrapper[4787]: W0219 19:34:23.560707 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d632d9f_7b63_4c37_b21a_a8053bb0922e.slice/crio-9a14c88009d770c9be1904519ee87fd148e8da74fc5cb294ea68ffd8363161dc WatchSource:0}: Error finding container 9a14c88009d770c9be1904519ee87fd148e8da74fc5cb294ea68ffd8363161dc: Status 404 returned error can't find the container with id 9a14c88009d770c9be1904519ee87fd148e8da74fc5cb294ea68ffd8363161dc Feb 19 19:34:24 crc kubenswrapper[4787]: I0219 19:34:24.415136 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" event={"ID":"5d632d9f-7b63-4c37-b21a-a8053bb0922e","Type":"ContainerStarted","Data":"9a14c88009d770c9be1904519ee87fd148e8da74fc5cb294ea68ffd8363161dc"} Feb 19 19:34:26 crc kubenswrapper[4787]: I0219 19:34:26.433266 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" event={"ID":"5d632d9f-7b63-4c37-b21a-a8053bb0922e","Type":"ContainerStarted","Data":"762760788a315d2c4b9d3541a4c9bf6c69398bc8f3759753cd33c4ce94b8ec63"} Feb 19 19:34:28 crc kubenswrapper[4787]: I0219 19:34:28.005376 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:28 crc kubenswrapper[4787]: I0219 19:34:28.005416 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:28 crc kubenswrapper[4787]: I0219 19:34:28.065678 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:28 crc kubenswrapper[4787]: I0219 19:34:28.090307 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-bkxn5" podStartSLOduration=4.039698096 podStartE2EDuration="6.090286228s" podCreationTimestamp="2026-02-19 19:34:22 +0000 UTC" firstStartedPulling="2026-02-19 19:34:23.563979964 +0000 UTC m=+931.354645946" lastFinishedPulling="2026-02-19 19:34:25.614568136 +0000 UTC m=+933.405234078" observedRunningTime="2026-02-19 19:34:26.458818552 +0000 UTC m=+934.249484514" watchObservedRunningTime="2026-02-19 19:34:28.090286228 +0000 UTC m=+935.880952180" Feb 19 19:34:28 crc kubenswrapper[4787]: I0219 19:34:28.495461 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:30 crc kubenswrapper[4787]: I0219 19:34:30.020455 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhcsv"] Feb 19 19:34:30 crc kubenswrapper[4787]: I0219 19:34:30.464389 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fhcsv" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="registry-server" containerID="cri-o://ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79" gracePeriod=2 Feb 19 19:34:30 crc kubenswrapper[4787]: I0219 19:34:30.903642 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.090671 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-catalog-content\") pod \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.090772 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbjnx\" (UniqueName: \"kubernetes.io/projected/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-kube-api-access-sbjnx\") pod \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.090825 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-utilities\") pod \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\" (UID: \"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb\") " Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.091815 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-utilities" (OuterVolumeSpecName: "utilities") pod "0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" (UID: "0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.104792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-kube-api-access-sbjnx" (OuterVolumeSpecName: "kube-api-access-sbjnx") pod "0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" (UID: "0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb"). InnerVolumeSpecName "kube-api-access-sbjnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.151146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" (UID: "0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.192661 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbjnx\" (UniqueName: \"kubernetes.io/projected/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-kube-api-access-sbjnx\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.192703 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.192719 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.475284 4787 generic.go:334] "Generic (PLEG): container finished" podID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerID="ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79" exitCode=0 Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.475347 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhcsv" event={"ID":"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb","Type":"ContainerDied","Data":"ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79"} Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.475383 4787 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-fhcsv" event={"ID":"0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb","Type":"ContainerDied","Data":"2bb475517828158b52230819342a76313c8682126c085e26dfda30c8e2ef487e"} Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.475407 4787 scope.go:117] "RemoveContainer" containerID="ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.475415 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhcsv" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.507314 4787 scope.go:117] "RemoveContainer" containerID="ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.515048 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhcsv"] Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.522785 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fhcsv"] Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.553176 4787 scope.go:117] "RemoveContainer" containerID="5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.571102 4787 scope.go:117] "RemoveContainer" containerID="ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79" Feb 19 19:34:31 crc kubenswrapper[4787]: E0219 19:34:31.571715 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79\": container with ID starting with ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79 not found: ID does not exist" containerID="ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 
19:34:31.571772 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79"} err="failed to get container status \"ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79\": rpc error: code = NotFound desc = could not find container \"ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79\": container with ID starting with ce18b2ffdf9562136adfa2460d193c6ef7001a2d2bbd169bfd32656226b25f79 not found: ID does not exist" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.571806 4787 scope.go:117] "RemoveContainer" containerID="ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889" Feb 19 19:34:31 crc kubenswrapper[4787]: E0219 19:34:31.572316 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889\": container with ID starting with ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889 not found: ID does not exist" containerID="ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.572411 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889"} err="failed to get container status \"ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889\": rpc error: code = NotFound desc = could not find container \"ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889\": container with ID starting with ce433f4d52c5ccc97400f9a19a534b729725100c21d90542a6df24ce5fb37889 not found: ID does not exist" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.572495 4787 scope.go:117] "RemoveContainer" containerID="5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8" Feb 19 19:34:31 crc 
kubenswrapper[4787]: E0219 19:34:31.572815 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8\": container with ID starting with 5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8 not found: ID does not exist" containerID="5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8" Feb 19 19:34:31 crc kubenswrapper[4787]: I0219 19:34:31.572851 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8"} err="failed to get container status \"5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8\": rpc error: code = NotFound desc = could not find container \"5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8\": container with ID starting with 5eda646d3dbee75ec2aaa5931b3d3a28e702ddcc96e48a7e54b1d4f7af4e3fc8 not found: ID does not exist" Feb 19 19:34:32 crc kubenswrapper[4787]: I0219 19:34:32.940835 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" path="/var/lib/kubelet/pods/0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb/volumes" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.980449 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn"] Feb 19 19:34:33 crc kubenswrapper[4787]: E0219 19:34:33.980982 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="registry-server" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.980994 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="registry-server" Feb 19 19:34:33 crc kubenswrapper[4787]: E0219 19:34:33.981007 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="extract-utilities" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.981013 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="extract-utilities" Feb 19 19:34:33 crc kubenswrapper[4787]: E0219 19:34:33.981027 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="extract-content" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.981033 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="extract-content" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.981162 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3fe43e-eb8f-4cd3-a4ec-638ea56b64eb" containerName="registry-server" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.981994 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" Feb 19 19:34:33 crc kubenswrapper[4787]: I0219 19:34:33.984422 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hrsh6" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.005453 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.006756 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.007579 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.008453 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.020645 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5qk4m"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.021513 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.027360 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.114517 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.115349 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.118950 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.120754 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.121460 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-t6kdc" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.129252 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.157744 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7vv\" (UniqueName: \"kubernetes.io/projected/94194c14-c7cd-4b05-bda1-74ea911cd6cf-kube-api-access-vn7vv\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.157816 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ace5ef3f-b2ed-4d41-a085-4c662e70061b-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6sjw5\" (UID: \"ace5ef3f-b2ed-4d41-a085-4c662e70061b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.157879 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sszb\" (UniqueName: \"kubernetes.io/projected/ace5ef3f-b2ed-4d41-a085-4c662e70061b-kube-api-access-6sszb\") pod \"nmstate-webhook-866bcb46dc-6sjw5\" (UID: \"ace5ef3f-b2ed-4d41-a085-4c662e70061b\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.157915 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-ovs-socket\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.157951 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a07ff2fe-5085-4c1a-8139-4a47329c88bc-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.157982 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clz5\" (UniqueName: \"kubernetes.io/projected/b771e6b6-cd00-431a-84fb-970db07534bd-kube-api-access-6clz5\") pod \"nmstate-metrics-58c85c668d-h6fxn\" (UID: \"b771e6b6-cd00-431a-84fb-970db07534bd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.158011 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-dbus-socket\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.158044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a07ff2fe-5085-4c1a-8139-4a47329c88bc-plugin-serving-cert\") 
pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.158070 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-nmstate-lock\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.158091 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzctd\" (UniqueName: \"kubernetes.io/projected/a07ff2fe-5085-4c1a-8139-4a47329c88bc-kube-api-access-xzctd\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259237 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sszb\" (UniqueName: \"kubernetes.io/projected/ace5ef3f-b2ed-4d41-a085-4c662e70061b-kube-api-access-6sszb\") pod \"nmstate-webhook-866bcb46dc-6sjw5\" (UID: \"ace5ef3f-b2ed-4d41-a085-4c662e70061b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259288 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-ovs-socket\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259320 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/a07ff2fe-5085-4c1a-8139-4a47329c88bc-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clz5\" (UniqueName: \"kubernetes.io/projected/b771e6b6-cd00-431a-84fb-970db07534bd-kube-api-access-6clz5\") pod \"nmstate-metrics-58c85c668d-h6fxn\" (UID: \"b771e6b6-cd00-431a-84fb-970db07534bd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259368 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-dbus-socket\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259393 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a07ff2fe-5085-4c1a-8139-4a47329c88bc-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259410 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-nmstate-lock\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xzctd\" (UniqueName: \"kubernetes.io/projected/a07ff2fe-5085-4c1a-8139-4a47329c88bc-kube-api-access-xzctd\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259449 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7vv\" (UniqueName: \"kubernetes.io/projected/94194c14-c7cd-4b05-bda1-74ea911cd6cf-kube-api-access-vn7vv\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ace5ef3f-b2ed-4d41-a085-4c662e70061b-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6sjw5\" (UID: \"ace5ef3f-b2ed-4d41-a085-4c662e70061b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259497 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-ovs-socket\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259644 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-nmstate-lock\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.259752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/94194c14-c7cd-4b05-bda1-74ea911cd6cf-dbus-socket\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.260263 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a07ff2fe-5085-4c1a-8139-4a47329c88bc-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.267428 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a07ff2fe-5085-4c1a-8139-4a47329c88bc-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.267915 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ace5ef3f-b2ed-4d41-a085-4c662e70061b-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-6sjw5\" (UID: \"ace5ef3f-b2ed-4d41-a085-4c662e70061b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.291377 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clz5\" (UniqueName: \"kubernetes.io/projected/b771e6b6-cd00-431a-84fb-970db07534bd-kube-api-access-6clz5\") pod \"nmstate-metrics-58c85c668d-h6fxn\" (UID: \"b771e6b6-cd00-431a-84fb-970db07534bd\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.292435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7vv\" 
(UniqueName: \"kubernetes.io/projected/94194c14-c7cd-4b05-bda1-74ea911cd6cf-kube-api-access-vn7vv\") pod \"nmstate-handler-5qk4m\" (UID: \"94194c14-c7cd-4b05-bda1-74ea911cd6cf\") " pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.293984 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzctd\" (UniqueName: \"kubernetes.io/projected/a07ff2fe-5085-4c1a-8139-4a47329c88bc-kube-api-access-xzctd\") pod \"nmstate-console-plugin-5c78fc5d65-58bxh\" (UID: \"a07ff2fe-5085-4c1a-8139-4a47329c88bc\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.303308 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sszb\" (UniqueName: \"kubernetes.io/projected/ace5ef3f-b2ed-4d41-a085-4c662e70061b-kube-api-access-6sszb\") pod \"nmstate-webhook-866bcb46dc-6sjw5\" (UID: \"ace5ef3f-b2ed-4d41-a085-4c662e70061b\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.324996 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bd66c8fd6-b6vcd"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.325900 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.337126 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd66c8fd6-b6vcd"] Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.358045 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-serving-cert\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361644 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-oauth-serving-cert\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361675 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-trusted-ca-bundle\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361711 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5crb\" (UniqueName: \"kubernetes.io/projected/59942447-448c-4d2e-b1ac-fe695185fc0e-kube-api-access-b5crb\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361737 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-console-config\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-oauth-config\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.361794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-service-ca\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.380936 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.399725 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.430552 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463510 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-serving-cert\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463561 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-oauth-serving-cert\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463598 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-trusted-ca-bundle\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463659 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5crb\" (UniqueName: \"kubernetes.io/projected/59942447-448c-4d2e-b1ac-fe695185fc0e-kube-api-access-b5crb\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463693 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-console-config\") pod \"console-bd66c8fd6-b6vcd\" (UID: 
\"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463721 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-oauth-config\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.463766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-service-ca\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.464725 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-service-ca\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.465756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-oauth-serving-cert\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.468117 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-trusted-ca-bundle\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " 
pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.468691 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-console-config\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.470254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-serving-cert\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.471374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-oauth-config\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.489912 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5crb\" (UniqueName: \"kubernetes.io/projected/59942447-448c-4d2e-b1ac-fe695185fc0e-kube-api-access-b5crb\") pod \"console-bd66c8fd6-b6vcd\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.515117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5qk4m" event={"ID":"94194c14-c7cd-4b05-bda1-74ea911cd6cf","Type":"ContainerStarted","Data":"abc204cdda3f27223a1a7eb02b518c1aa0d033c3e9da5a9af5eca2dae3fd5f36"} Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.684855 4787 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5"] Feb 19 19:34:34 crc kubenswrapper[4787]: W0219 19:34:34.709804 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace5ef3f_b2ed_4d41_a085_4c662e70061b.slice/crio-61372c80b24d04966ff6331972a13418da4afe956a456c4b997ae92795b61f72 WatchSource:0}: Error finding container 61372c80b24d04966ff6331972a13418da4afe956a456c4b997ae92795b61f72: Status 404 returned error can't find the container with id 61372c80b24d04966ff6331972a13418da4afe956a456c4b997ae92795b61f72 Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.769157 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.810078 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn"] Feb 19 19:34:34 crc kubenswrapper[4787]: W0219 19:34:34.819466 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb771e6b6_cd00_431a_84fb_970db07534bd.slice/crio-85056994bb2432cce610211019c73b1a7d179519f1c8bdfe75bf5c238dee7afa WatchSource:0}: Error finding container 85056994bb2432cce610211019c73b1a7d179519f1c8bdfe75bf5c238dee7afa: Status 404 returned error can't find the container with id 85056994bb2432cce610211019c73b1a7d179519f1c8bdfe75bf5c238dee7afa Feb 19 19:34:34 crc kubenswrapper[4787]: W0219 19:34:34.985503 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07ff2fe_5085_4c1a_8139_4a47329c88bc.slice/crio-76bade7fb745cba30428260edbb2f5bc78f286a53058443c2b2d86f73ea1eeba WatchSource:0}: Error finding container 76bade7fb745cba30428260edbb2f5bc78f286a53058443c2b2d86f73ea1eeba: Status 404 returned error can't find the container 
with id 76bade7fb745cba30428260edbb2f5bc78f286a53058443c2b2d86f73ea1eeba Feb 19 19:34:34 crc kubenswrapper[4787]: I0219 19:34:34.992222 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh"] Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.282693 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bd66c8fd6-b6vcd"] Feb 19 19:34:35 crc kubenswrapper[4787]: W0219 19:34:35.289173 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59942447_448c_4d2e_b1ac_fe695185fc0e.slice/crio-d80f07b146ec970efe0845f35e8e45dedfbf196fab8af95271167ad3985f41a5 WatchSource:0}: Error finding container d80f07b146ec970efe0845f35e8e45dedfbf196fab8af95271167ad3985f41a5: Status 404 returned error can't find the container with id d80f07b146ec970efe0845f35e8e45dedfbf196fab8af95271167ad3985f41a5 Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.542676 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" event={"ID":"ace5ef3f-b2ed-4d41-a085-4c662e70061b","Type":"ContainerStarted","Data":"61372c80b24d04966ff6331972a13418da4afe956a456c4b997ae92795b61f72"} Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.544310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd66c8fd6-b6vcd" event={"ID":"59942447-448c-4d2e-b1ac-fe695185fc0e","Type":"ContainerStarted","Data":"b5123fedff0152a767d5feeda526dcd30d31bcb07ccdf31903adad1ab4b38374"} Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.544334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd66c8fd6-b6vcd" event={"ID":"59942447-448c-4d2e-b1ac-fe695185fc0e","Type":"ContainerStarted","Data":"d80f07b146ec970efe0845f35e8e45dedfbf196fab8af95271167ad3985f41a5"} Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.545867 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" event={"ID":"a07ff2fe-5085-4c1a-8139-4a47329c88bc","Type":"ContainerStarted","Data":"76bade7fb745cba30428260edbb2f5bc78f286a53058443c2b2d86f73ea1eeba"} Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.547333 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" event={"ID":"b771e6b6-cd00-431a-84fb-970db07534bd","Type":"ContainerStarted","Data":"85056994bb2432cce610211019c73b1a7d179519f1c8bdfe75bf5c238dee7afa"} Feb 19 19:34:35 crc kubenswrapper[4787]: I0219 19:34:35.562050 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bd66c8fd6-b6vcd" podStartSLOduration=1.5620320300000001 podStartE2EDuration="1.56203203s" podCreationTimestamp="2026-02-19 19:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:34:35.56203335 +0000 UTC m=+943.352699312" watchObservedRunningTime="2026-02-19 19:34:35.56203203 +0000 UTC m=+943.352697972" Feb 19 19:34:37 crc kubenswrapper[4787]: I0219 19:34:37.562293 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" event={"ID":"a07ff2fe-5085-4c1a-8139-4a47329c88bc","Type":"ContainerStarted","Data":"4ca5200ed94e23ac854f4d2b193017894625e8a81ec4ac4cf0fff08b91f95c01"} Feb 19 19:34:37 crc kubenswrapper[4787]: I0219 19:34:37.564126 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" event={"ID":"b771e6b6-cd00-431a-84fb-970db07534bd","Type":"ContainerStarted","Data":"e645c367882f69a1fdd720a0d01727aa9cb860cdfdaceb9c890a9b831bcf4591"} Feb 19 19:34:37 crc kubenswrapper[4787]: I0219 19:34:37.565412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5qk4m" 
event={"ID":"94194c14-c7cd-4b05-bda1-74ea911cd6cf","Type":"ContainerStarted","Data":"9c2a17fed572ce41c9c5ccbb89e96a83358615bb63a5f11831b795ef291f1d6b"} Feb 19 19:34:37 crc kubenswrapper[4787]: I0219 19:34:37.565548 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:37 crc kubenswrapper[4787]: I0219 19:34:37.580848 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-58bxh" podStartSLOduration=1.366737467 podStartE2EDuration="3.580833088s" podCreationTimestamp="2026-02-19 19:34:34 +0000 UTC" firstStartedPulling="2026-02-19 19:34:35.001698572 +0000 UTC m=+942.792364514" lastFinishedPulling="2026-02-19 19:34:37.215794193 +0000 UTC m=+945.006460135" observedRunningTime="2026-02-19 19:34:37.577831131 +0000 UTC m=+945.368497063" watchObservedRunningTime="2026-02-19 19:34:37.580833088 +0000 UTC m=+945.371499030" Feb 19 19:34:37 crc kubenswrapper[4787]: I0219 19:34:37.609323 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5qk4m" podStartSLOduration=1.872583881 podStartE2EDuration="4.609304065s" podCreationTimestamp="2026-02-19 19:34:33 +0000 UTC" firstStartedPulling="2026-02-19 19:34:34.47807796 +0000 UTC m=+942.268743902" lastFinishedPulling="2026-02-19 19:34:37.214798134 +0000 UTC m=+945.005464086" observedRunningTime="2026-02-19 19:34:37.605923167 +0000 UTC m=+945.396589109" watchObservedRunningTime="2026-02-19 19:34:37.609304065 +0000 UTC m=+945.399970007" Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 19:34:39.262687 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 
19:34:39.263307 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 19:34:39.263352 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 19:34:39.263965 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"908353e6e26c8eb14aa15cfd6585d127a5cf2fd790c45d696549088ebf5dab4a"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 19:34:39.264024 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://908353e6e26c8eb14aa15cfd6585d127a5cf2fd790c45d696549088ebf5dab4a" gracePeriod=600 Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 19:34:39.585223 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="908353e6e26c8eb14aa15cfd6585d127a5cf2fd790c45d696549088ebf5dab4a" exitCode=0 Feb 19 19:34:39 crc kubenswrapper[4787]: I0219 19:34:39.585249 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"908353e6e26c8eb14aa15cfd6585d127a5cf2fd790c45d696549088ebf5dab4a"} Feb 19 19:34:39 crc 
kubenswrapper[4787]: I0219 19:34:39.585303 4787 scope.go:117] "RemoveContainer" containerID="eddfeaf72585fc8755796a91f30a98dc405a75dee35e13b5751f5a4b560c364c" Feb 19 19:34:40 crc kubenswrapper[4787]: I0219 19:34:40.593630 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"4705fa8f568a3ef2f81b22b29b62816c4b2d13e8cd966ddadc7147e1265dbf66"} Feb 19 19:34:40 crc kubenswrapper[4787]: I0219 19:34:40.595053 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" event={"ID":"b771e6b6-cd00-431a-84fb-970db07534bd","Type":"ContainerStarted","Data":"3cb69d122c664dae6f15efc83187d4a5a959603c9678c6e1fa138e95306ac410"} Feb 19 19:34:40 crc kubenswrapper[4787]: I0219 19:34:40.625916 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h6fxn" podStartSLOduration=2.7892059 podStartE2EDuration="7.62589533s" podCreationTimestamp="2026-02-19 19:34:33 +0000 UTC" firstStartedPulling="2026-02-19 19:34:34.821456516 +0000 UTC m=+942.612122458" lastFinishedPulling="2026-02-19 19:34:39.658145946 +0000 UTC m=+947.448811888" observedRunningTime="2026-02-19 19:34:40.623531981 +0000 UTC m=+948.414197953" watchObservedRunningTime="2026-02-19 19:34:40.62589533 +0000 UTC m=+948.416561272" Feb 19 19:34:42 crc kubenswrapper[4787]: I0219 19:34:42.611914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" event={"ID":"ace5ef3f-b2ed-4d41-a085-4c662e70061b","Type":"ContainerStarted","Data":"49730bef46ab68967ad2370031bfbb80584f10abfbd4ae58dfe1a94d0134b998"} Feb 19 19:34:42 crc kubenswrapper[4787]: I0219 19:34:42.613301 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:42 crc 
kubenswrapper[4787]: I0219 19:34:42.637206 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" podStartSLOduration=2.8811479 podStartE2EDuration="9.63718854s" podCreationTimestamp="2026-02-19 19:34:33 +0000 UTC" firstStartedPulling="2026-02-19 19:34:34.715765825 +0000 UTC m=+942.506431757" lastFinishedPulling="2026-02-19 19:34:41.471806455 +0000 UTC m=+949.262472397" observedRunningTime="2026-02-19 19:34:42.635909683 +0000 UTC m=+950.426575635" watchObservedRunningTime="2026-02-19 19:34:42.63718854 +0000 UTC m=+950.427854482" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.515640 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8d4b8"] Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.517723 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.523548 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d4b8"] Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.715308 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-utilities\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.715619 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7sx\" (UniqueName: \"kubernetes.io/projected/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-kube-api-access-5v7sx\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc 
kubenswrapper[4787]: I0219 19:34:43.715830 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-catalog-content\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.817684 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v7sx\" (UniqueName: \"kubernetes.io/projected/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-kube-api-access-5v7sx\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.817849 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-catalog-content\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.817970 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-utilities\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.818341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-catalog-content\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc 
kubenswrapper[4787]: I0219 19:34:43.818469 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-utilities\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.844136 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v7sx\" (UniqueName: \"kubernetes.io/projected/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-kube-api-access-5v7sx\") pod \"certified-operators-8d4b8\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:43 crc kubenswrapper[4787]: I0219 19:34:43.899016 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.384060 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d4b8"] Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.422915 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5qk4m" Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.645677 4787 generic.go:334] "Generic (PLEG): container finished" podID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerID="bb9316a0d05fe67235eb240bb392293d31583e7f2361e4a21b36ab0c088a88f4" exitCode=0 Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.645768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4b8" event={"ID":"e1bf113e-75c3-4e8a-8dd8-bdc6792af982","Type":"ContainerDied","Data":"bb9316a0d05fe67235eb240bb392293d31583e7f2361e4a21b36ab0c088a88f4"} Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.645823 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-8d4b8" event={"ID":"e1bf113e-75c3-4e8a-8dd8-bdc6792af982","Type":"ContainerStarted","Data":"1d5fdf2ce7a5429aa10db4591dcba13f630556c53fecdde1c54b807b7cc8acf8"} Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.769474 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.769539 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:44 crc kubenswrapper[4787]: I0219 19:34:44.776221 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:45 crc kubenswrapper[4787]: I0219 19:34:45.669015 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:34:45 crc kubenswrapper[4787]: I0219 19:34:45.730851 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76cdd75fc5-lmqvz"] Feb 19 19:34:46 crc kubenswrapper[4787]: I0219 19:34:46.674556 4787 generic.go:334] "Generic (PLEG): container finished" podID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerID="3cfafda76a87a34834c347afbcaafe7e00368517f5239a85726cdec60bdc330f" exitCode=0 Feb 19 19:34:46 crc kubenswrapper[4787]: I0219 19:34:46.674788 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4b8" event={"ID":"e1bf113e-75c3-4e8a-8dd8-bdc6792af982","Type":"ContainerDied","Data":"3cfafda76a87a34834c347afbcaafe7e00368517f5239a85726cdec60bdc330f"} Feb 19 19:34:47 crc kubenswrapper[4787]: I0219 19:34:47.684927 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4b8" 
event={"ID":"e1bf113e-75c3-4e8a-8dd8-bdc6792af982","Type":"ContainerStarted","Data":"6d1f4c598648a8c16618347a1bcbce8e4e4c7aaa399491d88762ebd30f9de9cf"} Feb 19 19:34:47 crc kubenswrapper[4787]: I0219 19:34:47.710262 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8d4b8" podStartSLOduration=2.214723431 podStartE2EDuration="4.710229537s" podCreationTimestamp="2026-02-19 19:34:43 +0000 UTC" firstStartedPulling="2026-02-19 19:34:44.647954425 +0000 UTC m=+952.438620407" lastFinishedPulling="2026-02-19 19:34:47.143460571 +0000 UTC m=+954.934126513" observedRunningTime="2026-02-19 19:34:47.70553784 +0000 UTC m=+955.496203822" watchObservedRunningTime="2026-02-19 19:34:47.710229537 +0000 UTC m=+955.500895489" Feb 19 19:34:53 crc kubenswrapper[4787]: I0219 19:34:53.899708 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:53 crc kubenswrapper[4787]: I0219 19:34:53.900228 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:53 crc kubenswrapper[4787]: I0219 19:34:53.952918 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:54 crc kubenswrapper[4787]: I0219 19:34:54.387499 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" Feb 19 19:34:54 crc kubenswrapper[4787]: I0219 19:34:54.792073 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:54 crc kubenswrapper[4787]: I0219 19:34:54.835726 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d4b8"] Feb 19 19:34:56 crc kubenswrapper[4787]: I0219 19:34:56.754387 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8d4b8" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="registry-server" containerID="cri-o://6d1f4c598648a8c16618347a1bcbce8e4e4c7aaa399491d88762ebd30f9de9cf" gracePeriod=2 Feb 19 19:34:57 crc kubenswrapper[4787]: I0219 19:34:57.764076 4787 generic.go:334] "Generic (PLEG): container finished" podID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerID="6d1f4c598648a8c16618347a1bcbce8e4e4c7aaa399491d88762ebd30f9de9cf" exitCode=0 Feb 19 19:34:57 crc kubenswrapper[4787]: I0219 19:34:57.764130 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4b8" event={"ID":"e1bf113e-75c3-4e8a-8dd8-bdc6792af982","Type":"ContainerDied","Data":"6d1f4c598648a8c16618347a1bcbce8e4e4c7aaa399491d88762ebd30f9de9cf"} Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.300198 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.395151 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v7sx\" (UniqueName: \"kubernetes.io/projected/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-kube-api-access-5v7sx\") pod \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.395493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-catalog-content\") pod \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.395644 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-utilities\") pod \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\" (UID: \"e1bf113e-75c3-4e8a-8dd8-bdc6792af982\") " Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.396353 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-utilities" (OuterVolumeSpecName: "utilities") pod "e1bf113e-75c3-4e8a-8dd8-bdc6792af982" (UID: "e1bf113e-75c3-4e8a-8dd8-bdc6792af982"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.404229 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-kube-api-access-5v7sx" (OuterVolumeSpecName: "kube-api-access-5v7sx") pod "e1bf113e-75c3-4e8a-8dd8-bdc6792af982" (UID: "e1bf113e-75c3-4e8a-8dd8-bdc6792af982"). InnerVolumeSpecName "kube-api-access-5v7sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.444344 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1bf113e-75c3-4e8a-8dd8-bdc6792af982" (UID: "e1bf113e-75c3-4e8a-8dd8-bdc6792af982"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.496927 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v7sx\" (UniqueName: \"kubernetes.io/projected/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-kube-api-access-5v7sx\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.496964 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.496973 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1bf113e-75c3-4e8a-8dd8-bdc6792af982-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.661406 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-586zk"] Feb 19 19:34:58 crc kubenswrapper[4787]: E0219 19:34:58.661726 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="extract-content" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.661744 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="extract-content" Feb 19 19:34:58 crc kubenswrapper[4787]: E0219 19:34:58.661757 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="extract-utilities" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.661764 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="extract-utilities" Feb 19 19:34:58 crc kubenswrapper[4787]: E0219 19:34:58.661779 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="registry-server" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.661786 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="registry-server" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.661898 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" containerName="registry-server" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.663405 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.679298 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-586zk"] Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.777322 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d4b8" event={"ID":"e1bf113e-75c3-4e8a-8dd8-bdc6792af982","Type":"ContainerDied","Data":"1d5fdf2ce7a5429aa10db4591dcba13f630556c53fecdde1c54b807b7cc8acf8"} Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.777396 4787 scope.go:117] "RemoveContainer" containerID="6d1f4c598648a8c16618347a1bcbce8e4e4c7aaa399491d88762ebd30f9de9cf" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.777391 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d4b8" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.801273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-utilities\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.801593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-catalog-content\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.801838 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbgj\" (UniqueName: \"kubernetes.io/projected/2f5718e7-a687-4918-b55b-8339297871b5-kube-api-access-nhbgj\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.803945 4787 scope.go:117] "RemoveContainer" containerID="3cfafda76a87a34834c347afbcaafe7e00368517f5239a85726cdec60bdc330f" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.813418 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d4b8"] Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.820019 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8d4b8"] Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.830265 4787 scope.go:117] "RemoveContainer" 
containerID="bb9316a0d05fe67235eb240bb392293d31583e7f2361e4a21b36ab0c088a88f4" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.901770 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1bf113e-75c3-4e8a-8dd8-bdc6792af982" path="/var/lib/kubelet/pods/e1bf113e-75c3-4e8a-8dd8-bdc6792af982/volumes" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.903233 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-utilities\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.903288 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-catalog-content\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.903359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhbgj\" (UniqueName: \"kubernetes.io/projected/2f5718e7-a687-4918-b55b-8339297871b5-kube-api-access-nhbgj\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.903728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-catalog-content\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.903785 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-utilities\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.921489 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhbgj\" (UniqueName: \"kubernetes.io/projected/2f5718e7-a687-4918-b55b-8339297871b5-kube-api-access-nhbgj\") pod \"redhat-marketplace-586zk\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:58 crc kubenswrapper[4787]: I0219 19:34:58.997449 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:34:59 crc kubenswrapper[4787]: I0219 19:34:59.433160 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-586zk"] Feb 19 19:34:59 crc kubenswrapper[4787]: I0219 19:34:59.790574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-586zk" event={"ID":"2f5718e7-a687-4918-b55b-8339297871b5","Type":"ContainerDied","Data":"e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841"} Feb 19 19:34:59 crc kubenswrapper[4787]: I0219 19:34:59.790601 4787 generic.go:334] "Generic (PLEG): container finished" podID="2f5718e7-a687-4918-b55b-8339297871b5" containerID="e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841" exitCode=0 Feb 19 19:34:59 crc kubenswrapper[4787]: I0219 19:34:59.790967 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-586zk" event={"ID":"2f5718e7-a687-4918-b55b-8339297871b5","Type":"ContainerStarted","Data":"4a36ffb17f376722e6de414fccd7de88e27ec4aca8efb619d4176bda237c9262"} Feb 19 19:34:59 crc kubenswrapper[4787]: I0219 
19:34:59.792735 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:35:01 crc kubenswrapper[4787]: I0219 19:35:01.807778 4787 generic.go:334] "Generic (PLEG): container finished" podID="2f5718e7-a687-4918-b55b-8339297871b5" containerID="4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466" exitCode=0 Feb 19 19:35:01 crc kubenswrapper[4787]: I0219 19:35:01.807900 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-586zk" event={"ID":"2f5718e7-a687-4918-b55b-8339297871b5","Type":"ContainerDied","Data":"4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466"} Feb 19 19:35:02 crc kubenswrapper[4787]: I0219 19:35:02.820560 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-586zk" event={"ID":"2f5718e7-a687-4918-b55b-8339297871b5","Type":"ContainerStarted","Data":"50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae"} Feb 19 19:35:02 crc kubenswrapper[4787]: I0219 19:35:02.839449 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-586zk" podStartSLOduration=2.248472576 podStartE2EDuration="4.839431585s" podCreationTimestamp="2026-02-19 19:34:58 +0000 UTC" firstStartedPulling="2026-02-19 19:34:59.792487669 +0000 UTC m=+967.583153611" lastFinishedPulling="2026-02-19 19:35:02.383446678 +0000 UTC m=+970.174112620" observedRunningTime="2026-02-19 19:35:02.838119317 +0000 UTC m=+970.628785279" watchObservedRunningTime="2026-02-19 19:35:02.839431585 +0000 UTC m=+970.630097527" Feb 19 19:35:08 crc kubenswrapper[4787]: I0219 19:35:08.998407 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:35:08 crc kubenswrapper[4787]: I0219 19:35:08.998928 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:35:09 crc kubenswrapper[4787]: I0219 19:35:09.045727 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:35:09 crc kubenswrapper[4787]: I0219 19:35:09.947632 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:35:10 crc kubenswrapper[4787]: I0219 19:35:10.008225 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-586zk"] Feb 19 19:35:10 crc kubenswrapper[4787]: I0219 19:35:10.833397 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76cdd75fc5-lmqvz" podUID="41596dfd-7706-4a0d-a152-026620b1d1b4" containerName="console" containerID="cri-o://7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757" gracePeriod=15 Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.310813 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76cdd75fc5-lmqvz_41596dfd-7706-4a0d-a152-026620b1d1b4/console/0.log" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.311138 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.413490 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-service-ca\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.414074 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-trusted-ca-bundle\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.414522 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-console-config\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.414014 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-service-ca" (OuterVolumeSpecName: "service-ca") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.414455 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.414856 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-console-config" (OuterVolumeSpecName: "console-config") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.415000 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-oauth-serving-cert\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.415109 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-oauth-config\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.415129 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-serving-cert\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.415541 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.416189 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgvdx\" (UniqueName: \"kubernetes.io/projected/41596dfd-7706-4a0d-a152-026620b1d1b4-kube-api-access-dgvdx\") pod \"41596dfd-7706-4a0d-a152-026620b1d1b4\" (UID: \"41596dfd-7706-4a0d-a152-026620b1d1b4\") " Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.416563 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.416578 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.416587 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.416630 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41596dfd-7706-4a0d-a152-026620b1d1b4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.426808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.433811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.433908 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41596dfd-7706-4a0d-a152-026620b1d1b4-kube-api-access-dgvdx" (OuterVolumeSpecName: "kube-api-access-dgvdx") pod "41596dfd-7706-4a0d-a152-026620b1d1b4" (UID: "41596dfd-7706-4a0d-a152-026620b1d1b4"). InnerVolumeSpecName "kube-api-access-dgvdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.517796 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgvdx\" (UniqueName: \"kubernetes.io/projected/41596dfd-7706-4a0d-a152-026620b1d1b4-kube-api-access-dgvdx\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.517829 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.517842 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41596dfd-7706-4a0d-a152-026620b1d1b4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.922810 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-76cdd75fc5-lmqvz_41596dfd-7706-4a0d-a152-026620b1d1b4/console/0.log" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.922860 4787 generic.go:334] "Generic (PLEG): container finished" podID="41596dfd-7706-4a0d-a152-026620b1d1b4" containerID="7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757" exitCode=2 Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.923082 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-586zk" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="registry-server" containerID="cri-o://50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae" gracePeriod=2 Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.923436 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76cdd75fc5-lmqvz" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.933233 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cdd75fc5-lmqvz" event={"ID":"41596dfd-7706-4a0d-a152-026620b1d1b4","Type":"ContainerDied","Data":"7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757"} Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.933280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76cdd75fc5-lmqvz" event={"ID":"41596dfd-7706-4a0d-a152-026620b1d1b4","Type":"ContainerDied","Data":"ee05c2c1d070d51dfbee3719d11c1bdf301540c88db0b9b076379eff8f993935"} Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.933297 4787 scope.go:117] "RemoveContainer" containerID="7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.962788 4787 scope.go:117] "RemoveContainer" containerID="7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757" Feb 19 19:35:11 crc kubenswrapper[4787]: E0219 19:35:11.963461 4787 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757\": container with ID starting with 7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757 not found: ID does not exist" containerID="7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.963517 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757"} err="failed to get container status \"7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757\": rpc error: code = NotFound desc = could not find container \"7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757\": container with ID starting with 7a881952548720ea1df560e2646d9bafedd7e67d1af5f30c1786587ef5581757 not found: ID does not exist" Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.966489 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76cdd75fc5-lmqvz"] Feb 19 19:35:11 crc kubenswrapper[4787]: I0219 19:35:11.975447 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76cdd75fc5-lmqvz"] Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.351572 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.515994 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj"] Feb 19 19:35:12 crc kubenswrapper[4787]: E0219 19:35:12.516338 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="extract-content" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.516391 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="extract-content" Feb 19 19:35:12 crc kubenswrapper[4787]: E0219 19:35:12.516423 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="registry-server" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.516429 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="registry-server" Feb 19 19:35:12 crc kubenswrapper[4787]: E0219 19:35:12.516439 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="extract-utilities" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.516445 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="extract-utilities" Feb 19 19:35:12 crc kubenswrapper[4787]: E0219 19:35:12.516453 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41596dfd-7706-4a0d-a152-026620b1d1b4" containerName="console" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.516459 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="41596dfd-7706-4a0d-a152-026620b1d1b4" containerName="console" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.516580 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2f5718e7-a687-4918-b55b-8339297871b5" containerName="registry-server" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.516593 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="41596dfd-7706-4a0d-a152-026620b1d1b4" containerName="console" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.517637 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.520008 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.528260 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj"] Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.558742 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-catalog-content\") pod \"2f5718e7-a687-4918-b55b-8339297871b5\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.558850 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhbgj\" (UniqueName: \"kubernetes.io/projected/2f5718e7-a687-4918-b55b-8339297871b5-kube-api-access-nhbgj\") pod \"2f5718e7-a687-4918-b55b-8339297871b5\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.558941 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-utilities\") pod \"2f5718e7-a687-4918-b55b-8339297871b5\" (UID: \"2f5718e7-a687-4918-b55b-8339297871b5\") " Feb 19 19:35:12 crc 
kubenswrapper[4787]: I0219 19:35:12.560489 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-utilities" (OuterVolumeSpecName: "utilities") pod "2f5718e7-a687-4918-b55b-8339297871b5" (UID: "2f5718e7-a687-4918-b55b-8339297871b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.565514 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f5718e7-a687-4918-b55b-8339297871b5-kube-api-access-nhbgj" (OuterVolumeSpecName: "kube-api-access-nhbgj") pod "2f5718e7-a687-4918-b55b-8339297871b5" (UID: "2f5718e7-a687-4918-b55b-8339297871b5"). InnerVolumeSpecName "kube-api-access-nhbgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.587266 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f5718e7-a687-4918-b55b-8339297871b5" (UID: "2f5718e7-a687-4918-b55b-8339297871b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.660589 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.660758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dvb\" (UniqueName: \"kubernetes.io/projected/82c7999d-e5ab-4203-8a11-784941050660-kube-api-access-w6dvb\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.660784 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.660937 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhbgj\" (UniqueName: \"kubernetes.io/projected/2f5718e7-a687-4918-b55b-8339297871b5-kube-api-access-nhbgj\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.660950 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.660960 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f5718e7-a687-4918-b55b-8339297871b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.762966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6dvb\" (UniqueName: \"kubernetes.io/projected/82c7999d-e5ab-4203-8a11-784941050660-kube-api-access-w6dvb\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.763060 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.763151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.763627 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-util\") 
pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.763752 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.779753 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6dvb\" (UniqueName: \"kubernetes.io/projected/82c7999d-e5ab-4203-8a11-784941050660-kube-api-access-w6dvb\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.895698 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.904504 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41596dfd-7706-4a0d-a152-026620b1d1b4" path="/var/lib/kubelet/pods/41596dfd-7706-4a0d-a152-026620b1d1b4/volumes" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.931794 4787 generic.go:334] "Generic (PLEG): container finished" podID="2f5718e7-a687-4918-b55b-8339297871b5" containerID="50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae" exitCode=0 Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.931869 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-586zk" event={"ID":"2f5718e7-a687-4918-b55b-8339297871b5","Type":"ContainerDied","Data":"50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae"} Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.932140 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-586zk" event={"ID":"2f5718e7-a687-4918-b55b-8339297871b5","Type":"ContainerDied","Data":"4a36ffb17f376722e6de414fccd7de88e27ec4aca8efb619d4176bda237c9262"} Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.932165 4787 scope.go:117] "RemoveContainer" containerID="50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.931895 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-586zk" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.957008 4787 scope.go:117] "RemoveContainer" containerID="4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466" Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.964697 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-586zk"] Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.974900 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-586zk"] Feb 19 19:35:12 crc kubenswrapper[4787]: I0219 19:35:12.980419 4787 scope.go:117] "RemoveContainer" containerID="e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.003453 4787 scope.go:117] "RemoveContainer" containerID="50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae" Feb 19 19:35:13 crc kubenswrapper[4787]: E0219 19:35:13.003929 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae\": container with ID starting with 50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae not found: ID does not exist" containerID="50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.003964 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae"} err="failed to get container status \"50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae\": rpc error: code = NotFound desc = could not find container \"50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae\": container with ID starting with 50baa1a4ad26ed2c4bb0260f72afbb68e2a36d7424e313048e9b633df0b317ae not found: 
ID does not exist" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.003991 4787 scope.go:117] "RemoveContainer" containerID="4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466" Feb 19 19:35:13 crc kubenswrapper[4787]: E0219 19:35:13.006075 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466\": container with ID starting with 4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466 not found: ID does not exist" containerID="4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.006096 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466"} err="failed to get container status \"4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466\": rpc error: code = NotFound desc = could not find container \"4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466\": container with ID starting with 4477d16f0f39a7380c4431c7fbcd9e00cfb8d8ffb66acad9d1a1f2d09dc4b466 not found: ID does not exist" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.006111 4787 scope.go:117] "RemoveContainer" containerID="e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841" Feb 19 19:35:13 crc kubenswrapper[4787]: E0219 19:35:13.006392 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841\": container with ID starting with e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841 not found: ID does not exist" containerID="e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.006428 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841"} err="failed to get container status \"e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841\": rpc error: code = NotFound desc = could not find container \"e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841\": container with ID starting with e7d7deadb3947bfe3eda40df4b6082956efa8a6e44d11a70059e5a443eaa5841 not found: ID does not exist" Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.310042 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj"] Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.941098 4787 generic.go:334] "Generic (PLEG): container finished" podID="82c7999d-e5ab-4203-8a11-784941050660" containerID="bb6e9b30c228cd34220329a81819206876f8ac8fee4985c5ba4fbe4c049678c0" exitCode=0 Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.941153 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" event={"ID":"82c7999d-e5ab-4203-8a11-784941050660","Type":"ContainerDied","Data":"bb6e9b30c228cd34220329a81819206876f8ac8fee4985c5ba4fbe4c049678c0"} Feb 19 19:35:13 crc kubenswrapper[4787]: I0219 19:35:13.941497 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" event={"ID":"82c7999d-e5ab-4203-8a11-784941050660","Type":"ContainerStarted","Data":"7687ace50d6600ab9bd4260a6efe4c9dfac77a2942cbd92abb9d352abaa1b540"} Feb 19 19:35:14 crc kubenswrapper[4787]: I0219 19:35:14.899774 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f5718e7-a687-4918-b55b-8339297871b5" path="/var/lib/kubelet/pods/2f5718e7-a687-4918-b55b-8339297871b5/volumes" Feb 19 19:35:15 crc kubenswrapper[4787]: I0219 19:35:15.961704 
4787 generic.go:334] "Generic (PLEG): container finished" podID="82c7999d-e5ab-4203-8a11-784941050660" containerID="97d8cb690fbfd1e95809c43d85e3771663e65de7a0188ae69896cc37c9996367" exitCode=0 Feb 19 19:35:15 crc kubenswrapper[4787]: I0219 19:35:15.961799 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" event={"ID":"82c7999d-e5ab-4203-8a11-784941050660","Type":"ContainerDied","Data":"97d8cb690fbfd1e95809c43d85e3771663e65de7a0188ae69896cc37c9996367"} Feb 19 19:35:16 crc kubenswrapper[4787]: I0219 19:35:16.973102 4787 generic.go:334] "Generic (PLEG): container finished" podID="82c7999d-e5ab-4203-8a11-784941050660" containerID="24e0ed83437f98b16351150b294e8d7f06c4b4a0f5bfb1bbc1ffa64e542990f8" exitCode=0 Feb 19 19:35:16 crc kubenswrapper[4787]: I0219 19:35:16.973411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" event={"ID":"82c7999d-e5ab-4203-8a11-784941050660","Type":"ContainerDied","Data":"24e0ed83437f98b16351150b294e8d7f06c4b4a0f5bfb1bbc1ffa64e542990f8"} Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.334977 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.444731 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-util\") pod \"82c7999d-e5ab-4203-8a11-784941050660\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.444921 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6dvb\" (UniqueName: \"kubernetes.io/projected/82c7999d-e5ab-4203-8a11-784941050660-kube-api-access-w6dvb\") pod \"82c7999d-e5ab-4203-8a11-784941050660\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.444983 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-bundle\") pod \"82c7999d-e5ab-4203-8a11-784941050660\" (UID: \"82c7999d-e5ab-4203-8a11-784941050660\") " Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.445976 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-bundle" (OuterVolumeSpecName: "bundle") pod "82c7999d-e5ab-4203-8a11-784941050660" (UID: "82c7999d-e5ab-4203-8a11-784941050660"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.452186 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c7999d-e5ab-4203-8a11-784941050660-kube-api-access-w6dvb" (OuterVolumeSpecName: "kube-api-access-w6dvb") pod "82c7999d-e5ab-4203-8a11-784941050660" (UID: "82c7999d-e5ab-4203-8a11-784941050660"). InnerVolumeSpecName "kube-api-access-w6dvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.458754 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-util" (OuterVolumeSpecName: "util") pod "82c7999d-e5ab-4203-8a11-784941050660" (UID: "82c7999d-e5ab-4203-8a11-784941050660"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.546660 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.546705 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82c7999d-e5ab-4203-8a11-784941050660-util\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.546720 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6dvb\" (UniqueName: \"kubernetes.io/projected/82c7999d-e5ab-4203-8a11-784941050660-kube-api-access-w6dvb\") on node \"crc\" DevicePath \"\"" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.988052 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" event={"ID":"82c7999d-e5ab-4203-8a11-784941050660","Type":"ContainerDied","Data":"7687ace50d6600ab9bd4260a6efe4c9dfac77a2942cbd92abb9d352abaa1b540"} Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.988094 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7687ace50d6600ab9bd4260a6efe4c9dfac77a2942cbd92abb9d352abaa1b540" Feb 19 19:35:18 crc kubenswrapper[4787]: I0219 19:35:18.988117 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.148245 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs"] Feb 19 19:35:28 crc kubenswrapper[4787]: E0219 19:35:28.148926 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="pull" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.148938 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="pull" Feb 19 19:35:28 crc kubenswrapper[4787]: E0219 19:35:28.148955 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="extract" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.148961 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="extract" Feb 19 19:35:28 crc kubenswrapper[4787]: E0219 19:35:28.148973 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="util" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.148980 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="util" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.149102 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c7999d-e5ab-4203-8a11-784941050660" containerName="extract" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.149581 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.151935 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.152318 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.152382 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.154961 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.155277 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-52n28" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.169196 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs"] Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.298554 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqjl\" (UniqueName: \"kubernetes.io/projected/58f1bc3e-9217-48c3-80af-e4979969b991-kube-api-access-rcqjl\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.298824 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58f1bc3e-9217-48c3-80af-e4979969b991-apiservice-cert\") pod 
\"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.298869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58f1bc3e-9217-48c3-80af-e4979969b991-webhook-cert\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.400819 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqjl\" (UniqueName: \"kubernetes.io/projected/58f1bc3e-9217-48c3-80af-e4979969b991-kube-api-access-rcqjl\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.401201 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58f1bc3e-9217-48c3-80af-e4979969b991-apiservice-cert\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.401243 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58f1bc3e-9217-48c3-80af-e4979969b991-webhook-cert\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc 
kubenswrapper[4787]: I0219 19:35:28.410076 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss"] Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.410290 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58f1bc3e-9217-48c3-80af-e4979969b991-webhook-cert\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.410527 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58f1bc3e-9217-48c3-80af-e4979969b991-apiservice-cert\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.411152 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.413702 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gxv2s" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.413747 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.413811 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.425999 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss"] Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.427012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqjl\" (UniqueName: \"kubernetes.io/projected/58f1bc3e-9217-48c3-80af-e4979969b991-kube-api-access-rcqjl\") pod \"metallb-operator-controller-manager-59989c9b4f-q9rqs\" (UID: \"58f1bc3e-9217-48c3-80af-e4979969b991\") " pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.502602 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2428ab4-02d6-4400-820b-995a002fb38c-webhook-cert\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.502702 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfnn\" (UniqueName: 
\"kubernetes.io/projected/a2428ab4-02d6-4400-820b-995a002fb38c-kube-api-access-scfnn\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.502744 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2428ab4-02d6-4400-820b-995a002fb38c-apiservice-cert\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.512921 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.603508 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2428ab4-02d6-4400-820b-995a002fb38c-webhook-cert\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.603572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfnn\" (UniqueName: \"kubernetes.io/projected/a2428ab4-02d6-4400-820b-995a002fb38c-kube-api-access-scfnn\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.603620 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/a2428ab4-02d6-4400-820b-995a002fb38c-apiservice-cert\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.609349 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a2428ab4-02d6-4400-820b-995a002fb38c-webhook-cert\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.611148 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a2428ab4-02d6-4400-820b-995a002fb38c-apiservice-cert\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.622641 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfnn\" (UniqueName: \"kubernetes.io/projected/a2428ab4-02d6-4400-820b-995a002fb38c-kube-api-access-scfnn\") pod \"metallb-operator-webhook-server-69988d54ff-ndzss\" (UID: \"a2428ab4-02d6-4400-820b-995a002fb38c\") " pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.788558 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:28 crc kubenswrapper[4787]: I0219 19:35:28.984058 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs"] Feb 19 19:35:29 crc kubenswrapper[4787]: I0219 19:35:29.057033 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" event={"ID":"58f1bc3e-9217-48c3-80af-e4979969b991","Type":"ContainerStarted","Data":"085a69532255b1a5c7d5c2e6cdda731dcb503df936a48bda89084d0d3828f272"} Feb 19 19:35:29 crc kubenswrapper[4787]: I0219 19:35:29.202635 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss"] Feb 19 19:35:29 crc kubenswrapper[4787]: W0219 19:35:29.207694 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2428ab4_02d6_4400_820b_995a002fb38c.slice/crio-62802aae20ca83cf170b4545bc286a142139fd10b5b15f4d08f1946528f5cb39 WatchSource:0}: Error finding container 62802aae20ca83cf170b4545bc286a142139fd10b5b15f4d08f1946528f5cb39: Status 404 returned error can't find the container with id 62802aae20ca83cf170b4545bc286a142139fd10b5b15f4d08f1946528f5cb39 Feb 19 19:35:30 crc kubenswrapper[4787]: I0219 19:35:30.070334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" event={"ID":"a2428ab4-02d6-4400-820b-995a002fb38c","Type":"ContainerStarted","Data":"62802aae20ca83cf170b4545bc286a142139fd10b5b15f4d08f1946528f5cb39"} Feb 19 19:35:34 crc kubenswrapper[4787]: I0219 19:35:34.103270 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" 
event={"ID":"58f1bc3e-9217-48c3-80af-e4979969b991","Type":"ContainerStarted","Data":"5f06b202c4b295287954fa1ff89b8167472dbcbc71c73f01f154be1e3c7b9ed4"} Feb 19 19:35:34 crc kubenswrapper[4787]: I0219 19:35:34.104817 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:35:34 crc kubenswrapper[4787]: I0219 19:35:34.106776 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" event={"ID":"a2428ab4-02d6-4400-820b-995a002fb38c","Type":"ContainerStarted","Data":"2edd4cb1e91b309c9be805a76934948a8f2e6b0fafdfd83cf7020984ee615ae9"} Feb 19 19:35:34 crc kubenswrapper[4787]: I0219 19:35:34.106920 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:35:34 crc kubenswrapper[4787]: I0219 19:35:34.135499 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" podStartSLOduration=1.350512848 podStartE2EDuration="6.135483953s" podCreationTimestamp="2026-02-19 19:35:28 +0000 UTC" firstStartedPulling="2026-02-19 19:35:28.988693766 +0000 UTC m=+996.779359708" lastFinishedPulling="2026-02-19 19:35:33.773664871 +0000 UTC m=+1001.564330813" observedRunningTime="2026-02-19 19:35:34.12665804 +0000 UTC m=+1001.917323982" watchObservedRunningTime="2026-02-19 19:35:34.135483953 +0000 UTC m=+1001.926149895" Feb 19 19:35:34 crc kubenswrapper[4787]: I0219 19:35:34.158107 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" podStartSLOduration=1.5793675550000001 podStartE2EDuration="6.158085899s" podCreationTimestamp="2026-02-19 19:35:28 +0000 UTC" firstStartedPulling="2026-02-19 19:35:29.211502961 +0000 UTC m=+997.002168903" lastFinishedPulling="2026-02-19 
19:35:33.790221305 +0000 UTC m=+1001.580887247" observedRunningTime="2026-02-19 19:35:34.151551172 +0000 UTC m=+1001.942217124" watchObservedRunningTime="2026-02-19 19:35:34.158085899 +0000 UTC m=+1001.948751841" Feb 19 19:35:48 crc kubenswrapper[4787]: I0219 19:35:48.794483 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" Feb 19 19:36:08 crc kubenswrapper[4787]: I0219 19:36:08.515762 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.224690 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5"] Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.226346 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.230045 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.230423 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-m9ndh" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.232187 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mvnk9"] Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.236723 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.240863 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.241181 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.246378 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5"] Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.303543 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztb8b\" (UniqueName: \"kubernetes.io/projected/d56f4bb8-5768-45e0-9cf9-6d759249fe69-kube-api-access-ztb8b\") pod \"frr-k8s-webhook-server-78b44bf5bb-dq2f5\" (UID: \"d56f4bb8-5768-45e0-9cf9-6d759249fe69\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.303653 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d56f4bb8-5768-45e0-9cf9-6d759249fe69-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-dq2f5\" (UID: \"d56f4bb8-5768-45e0-9cf9-6d759249fe69\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.324374 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pn7bv"] Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.325635 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.329636 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.329806 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bwqtb" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.329911 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.341103 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.353244 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-9sqnr"] Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.357751 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.361330 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.376285 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-9sqnr"] Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405207 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-metrics\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405253 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztb8b\" (UniqueName: \"kubernetes.io/projected/d56f4bb8-5768-45e0-9cf9-6d759249fe69-kube-api-access-ztb8b\") pod \"frr-k8s-webhook-server-78b44bf5bb-dq2f5\" (UID: \"d56f4bb8-5768-45e0-9cf9-6d759249fe69\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405285 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-startup\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-metrics-certs\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc 
kubenswrapper[4787]: I0219 19:36:09.405330 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-reloader\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405349 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eeee751-e7e9-412b-81cf-2bd7e702303d-metrics-certs\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d56f4bb8-5768-45e0-9cf9-6d759249fe69-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-dq2f5\" (UID: \"d56f4bb8-5768-45e0-9cf9-6d759249fe69\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405406 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndppp\" (UniqueName: \"kubernetes.io/projected/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-kube-api-access-ndppp\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405429 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-metallb-excludel2\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405444 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405483 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-conf\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405498 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-sockets\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.405517 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgq9l\" (UniqueName: \"kubernetes.io/projected/8eeee751-e7e9-412b-81cf-2bd7e702303d-kube-api-access-cgq9l\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.421332 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d56f4bb8-5768-45e0-9cf9-6d759249fe69-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-dq2f5\" (UID: \"d56f4bb8-5768-45e0-9cf9-6d759249fe69\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.421700 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztb8b\" 
(UniqueName: \"kubernetes.io/projected/d56f4bb8-5768-45e0-9cf9-6d759249fe69-kube-api-access-ztb8b\") pod \"frr-k8s-webhook-server-78b44bf5bb-dq2f5\" (UID: \"d56f4bb8-5768-45e0-9cf9-6d759249fe69\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506750 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-metrics\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506811 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-startup\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-cert\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506859 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-metrics-certs\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506880 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-reloader\") pod \"frr-k8s-mvnk9\" (UID: 
\"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eeee751-e7e9-412b-81cf-2bd7e702303d-metrics-certs\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506945 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndppp\" (UniqueName: \"kubernetes.io/projected/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-kube-api-access-ndppp\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.506981 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-metallb-excludel2\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.507010 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqmv\" (UniqueName: \"kubernetes.io/projected/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-kube-api-access-jlqmv\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc 
kubenswrapper[4787]: I0219 19:36:09.507026 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-metrics-certs\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.507052 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-conf\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.507067 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-sockets\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.507085 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq9l\" (UniqueName: \"kubernetes.io/projected/8eeee751-e7e9-412b-81cf-2bd7e702303d-kube-api-access-cgq9l\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.507722 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-metrics\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.508347 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-startup\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: E0219 19:36:09.508999 4787 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 19:36:09 crc kubenswrapper[4787]: E0219 19:36:09.509086 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist podName:af7a16ca-ed17-45d6-aa9e-f2552dc92af7 nodeName:}" failed. No retries permitted until 2026-02-19 19:36:10.009062894 +0000 UTC m=+1037.799728916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist") pod "speaker-pn7bv" (UID: "af7a16ca-ed17-45d6-aa9e-f2552dc92af7") : secret "metallb-memberlist" not found Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.509116 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-conf\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.509204 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-metallb-excludel2\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.509273 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-frr-sockets\") pod \"frr-k8s-mvnk9\" (UID: 
\"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.509400 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8eeee751-e7e9-412b-81cf-2bd7e702303d-reloader\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.511266 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-metrics-certs\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.518168 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8eeee751-e7e9-412b-81cf-2bd7e702303d-metrics-certs\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.529363 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndppp\" (UniqueName: \"kubernetes.io/projected/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-kube-api-access-ndppp\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.532541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgq9l\" (UniqueName: \"kubernetes.io/projected/8eeee751-e7e9-412b-81cf-2bd7e702303d-kube-api-access-cgq9l\") pod \"frr-k8s-mvnk9\" (UID: \"8eeee751-e7e9-412b-81cf-2bd7e702303d\") " pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.557826 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.566901 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.624483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlqmv\" (UniqueName: \"kubernetes.io/projected/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-kube-api-access-jlqmv\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.624538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-metrics-certs\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.624673 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-cert\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.634986 4787 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.638020 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-metrics-certs\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc 
kubenswrapper[4787]: I0219 19:36:09.643145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlqmv\" (UniqueName: \"kubernetes.io/projected/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-kube-api-access-jlqmv\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.647721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56aef487-656a-47b1-b3b4-d9fe6f62b1f4-cert\") pod \"controller-69bbfbf88f-9sqnr\" (UID: \"56aef487-656a-47b1-b3b4-d9fe6f62b1f4\") " pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:09 crc kubenswrapper[4787]: I0219 19:36:09.675554 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:10 crc kubenswrapper[4787]: I0219 19:36:10.032709 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:10 crc kubenswrapper[4787]: E0219 19:36:10.032870 4787 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 19:36:10 crc kubenswrapper[4787]: E0219 19:36:10.033571 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist podName:af7a16ca-ed17-45d6-aa9e-f2552dc92af7 nodeName:}" failed. No retries permitted until 2026-02-19 19:36:11.033556159 +0000 UTC m=+1038.824222101 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist") pod "speaker-pn7bv" (UID: "af7a16ca-ed17-45d6-aa9e-f2552dc92af7") : secret "metallb-memberlist" not found Feb 19 19:36:10 crc kubenswrapper[4787]: I0219 19:36:10.056504 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5"] Feb 19 19:36:10 crc kubenswrapper[4787]: W0219 19:36:10.058885 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd56f4bb8_5768_45e0_9cf9_6d759249fe69.slice/crio-30359367b2a12a4fd46cff9a979888315ee3128d7a365f78ea0c491a6f2e666e WatchSource:0}: Error finding container 30359367b2a12a4fd46cff9a979888315ee3128d7a365f78ea0c491a6f2e666e: Status 404 returned error can't find the container with id 30359367b2a12a4fd46cff9a979888315ee3128d7a365f78ea0c491a6f2e666e Feb 19 19:36:10 crc kubenswrapper[4787]: I0219 19:36:10.154095 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-9sqnr"] Feb 19 19:36:10 crc kubenswrapper[4787]: W0219 19:36:10.162195 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56aef487_656a_47b1_b3b4_d9fe6f62b1f4.slice/crio-a67ca26a0c2b837fc0215c50e6b13796cb709c87d083c40973daa3e196e96c39 WatchSource:0}: Error finding container a67ca26a0c2b837fc0215c50e6b13796cb709c87d083c40973daa3e196e96c39: Status 404 returned error can't find the container with id a67ca26a0c2b837fc0215c50e6b13796cb709c87d083c40973daa3e196e96c39 Feb 19 19:36:10 crc kubenswrapper[4787]: I0219 19:36:10.483340 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"a5dd628f593cc60b72a064a78db0cbc97b94ea97b5ea11e47ef3f56e4f214220"} Feb 19 19:36:10 crc 
kubenswrapper[4787]: I0219 19:36:10.484276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" event={"ID":"d56f4bb8-5768-45e0-9cf9-6d759249fe69","Type":"ContainerStarted","Data":"30359367b2a12a4fd46cff9a979888315ee3128d7a365f78ea0c491a6f2e666e"} Feb 19 19:36:10 crc kubenswrapper[4787]: I0219 19:36:10.485244 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-9sqnr" event={"ID":"56aef487-656a-47b1-b3b4-d9fe6f62b1f4","Type":"ContainerStarted","Data":"1522c7a340d8d93232435020a42777f7b810e9b4f5df606a3678443769df78a4"} Feb 19 19:36:10 crc kubenswrapper[4787]: I0219 19:36:10.485268 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-9sqnr" event={"ID":"56aef487-656a-47b1-b3b4-d9fe6f62b1f4","Type":"ContainerStarted","Data":"a67ca26a0c2b837fc0215c50e6b13796cb709c87d083c40973daa3e196e96c39"} Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.050093 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.063402 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/af7a16ca-ed17-45d6-aa9e-f2552dc92af7-memberlist\") pod \"speaker-pn7bv\" (UID: \"af7a16ca-ed17-45d6-aa9e-f2552dc92af7\") " pod="metallb-system/speaker-pn7bv" Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.142653 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pn7bv" Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.502018 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-9sqnr" event={"ID":"56aef487-656a-47b1-b3b4-d9fe6f62b1f4","Type":"ContainerStarted","Data":"64f541415dbc646ed0cf9c763ed1bf7787ad806288cd3c293a578daa079f33f9"} Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.502405 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-9sqnr" Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.523032 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pn7bv" event={"ID":"af7a16ca-ed17-45d6-aa9e-f2552dc92af7","Type":"ContainerStarted","Data":"4986e2600724cd5da7f30a369f7745161e735577d27e573aba9cbb3d1f7e2a9c"} Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.523080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pn7bv" event={"ID":"af7a16ca-ed17-45d6-aa9e-f2552dc92af7","Type":"ContainerStarted","Data":"71841d2cf3c8576883638d7307fafbe294ade01ef5a80254d65dec4fae45754c"} Feb 19 19:36:11 crc kubenswrapper[4787]: I0219 19:36:11.555324 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-9sqnr" podStartSLOduration=2.555297355 podStartE2EDuration="2.555297355s" podCreationTimestamp="2026-02-19 19:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:36:11.544122415 +0000 UTC m=+1039.334788367" watchObservedRunningTime="2026-02-19 19:36:11.555297355 +0000 UTC m=+1039.345963297" Feb 19 19:36:12 crc kubenswrapper[4787]: I0219 19:36:12.531759 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pn7bv" 
event={"ID":"af7a16ca-ed17-45d6-aa9e-f2552dc92af7","Type":"ContainerStarted","Data":"0081d79a1b370899f3e028d2a146fc667b0aec00bb9023d7e0d7a25ddbaa42dc"} Feb 19 19:36:12 crc kubenswrapper[4787]: I0219 19:36:12.531874 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pn7bv" Feb 19 19:36:12 crc kubenswrapper[4787]: I0219 19:36:12.565094 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pn7bv" podStartSLOduration=3.56507665 podStartE2EDuration="3.56507665s" podCreationTimestamp="2026-02-19 19:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:36:12.553328394 +0000 UTC m=+1040.343994356" watchObservedRunningTime="2026-02-19 19:36:12.56507665 +0000 UTC m=+1040.355742582" Feb 19 19:36:18 crc kubenswrapper[4787]: I0219 19:36:18.587234 4787 generic.go:334] "Generic (PLEG): container finished" podID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerID="0877f46f7e8b12352161c6bb89739c3da2c9247a2bb48ecea38d75d6d465a58b" exitCode=0 Feb 19 19:36:18 crc kubenswrapper[4787]: I0219 19:36:18.587338 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerDied","Data":"0877f46f7e8b12352161c6bb89739c3da2c9247a2bb48ecea38d75d6d465a58b"} Feb 19 19:36:18 crc kubenswrapper[4787]: I0219 19:36:18.589634 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" event={"ID":"d56f4bb8-5768-45e0-9cf9-6d759249fe69","Type":"ContainerStarted","Data":"8c32098a950285e0437febf5180bf834a89935adfbb1997536e1a78fb3e97c01"} Feb 19 19:36:18 crc kubenswrapper[4787]: I0219 19:36:18.589741 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" Feb 19 19:36:18 crc kubenswrapper[4787]: 
I0219 19:36:18.626194 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" podStartSLOduration=2.125394456 podStartE2EDuration="9.626174931s" podCreationTimestamp="2026-02-19 19:36:09 +0000 UTC" firstStartedPulling="2026-02-19 19:36:10.061051796 +0000 UTC m=+1037.851717738" lastFinishedPulling="2026-02-19 19:36:17.561832271 +0000 UTC m=+1045.352498213" observedRunningTime="2026-02-19 19:36:18.62477628 +0000 UTC m=+1046.415442222" watchObservedRunningTime="2026-02-19 19:36:18.626174931 +0000 UTC m=+1046.416840873" Feb 19 19:36:19 crc kubenswrapper[4787]: I0219 19:36:19.601414 4787 generic.go:334] "Generic (PLEG): container finished" podID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerID="9ff2c31ddc6067f81e791c1a71916bea3ecc733b2c026288a6ee1cb9f42eaa83" exitCode=0 Feb 19 19:36:19 crc kubenswrapper[4787]: I0219 19:36:19.601561 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerDied","Data":"9ff2c31ddc6067f81e791c1a71916bea3ecc733b2c026288a6ee1cb9f42eaa83"} Feb 19 19:36:20 crc kubenswrapper[4787]: I0219 19:36:20.610811 4787 generic.go:334] "Generic (PLEG): container finished" podID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerID="0224f41a7770433269ecd2feaae858f90c94aaa09792d9b3bbc5994358855877" exitCode=0 Feb 19 19:36:20 crc kubenswrapper[4787]: I0219 19:36:20.610893 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerDied","Data":"0224f41a7770433269ecd2feaae858f90c94aaa09792d9b3bbc5994358855877"} Feb 19 19:36:21 crc kubenswrapper[4787]: I0219 19:36:21.147155 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pn7bv" Feb 19 19:36:21 crc kubenswrapper[4787]: I0219 19:36:21.622953 4787 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"e62f3cd1732a5bc0d6ed5a30c5510ba9f6c358a54de0028ba97768a071f8d2b8"} Feb 19 19:36:21 crc kubenswrapper[4787]: I0219 19:36:21.623005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"02cc54d6d6d19f66cf2f13cf041171b67fb7f1e4a77ac4df80a33918c8a0064a"} Feb 19 19:36:21 crc kubenswrapper[4787]: I0219 19:36:21.623019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"ab1b00420017b027278de30e6fe38ecd03fa1abecd008fab2988b0ddae8394c5"} Feb 19 19:36:21 crc kubenswrapper[4787]: I0219 19:36:21.623032 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"b79d39fcaf357f8a5a0ce33fe464792f734009205e08c006ac00d45391fd2b76"} Feb 19 19:36:21 crc kubenswrapper[4787]: I0219 19:36:21.623043 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"5c40bfc1972cb6c00155765a35e67f3ecc279faf233dced75a9924a9b5535487"} Feb 19 19:36:22 crc kubenswrapper[4787]: I0219 19:36:22.649295 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"d09e15678b4839447d01db9fd2076eb2654b92d23ad288029ae53e8aa58ca826"} Feb 19 19:36:22 crc kubenswrapper[4787]: I0219 19:36:22.649689 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.026333 4787 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/frr-k8s-mvnk9" podStartSLOduration=7.218414664 podStartE2EDuration="15.026316666s" podCreationTimestamp="2026-02-19 19:36:09 +0000 UTC" firstStartedPulling="2026-02-19 19:36:09.754399723 +0000 UTC m=+1037.545065665" lastFinishedPulling="2026-02-19 19:36:17.562301725 +0000 UTC m=+1045.352967667" observedRunningTime="2026-02-19 19:36:22.679341146 +0000 UTC m=+1050.470007088" watchObservedRunningTime="2026-02-19 19:36:24.026316666 +0000 UTC m=+1051.816982608" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.032187 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2mzkm"] Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.033876 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2mzkm" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.037006 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7l68q" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.037042 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.042252 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.069683 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2mzkm"] Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.208236 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tqz\" (UniqueName: \"kubernetes.io/projected/4bd7ec59-a31b-440d-84ae-a52778899e79-kube-api-access-p5tqz\") pod \"openstack-operator-index-2mzkm\" (UID: \"4bd7ec59-a31b-440d-84ae-a52778899e79\") " 
pod="openstack-operators/openstack-operator-index-2mzkm" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.309832 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tqz\" (UniqueName: \"kubernetes.io/projected/4bd7ec59-a31b-440d-84ae-a52778899e79-kube-api-access-p5tqz\") pod \"openstack-operator-index-2mzkm\" (UID: \"4bd7ec59-a31b-440d-84ae-a52778899e79\") " pod="openstack-operators/openstack-operator-index-2mzkm" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.338480 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tqz\" (UniqueName: \"kubernetes.io/projected/4bd7ec59-a31b-440d-84ae-a52778899e79-kube-api-access-p5tqz\") pod \"openstack-operator-index-2mzkm\" (UID: \"4bd7ec59-a31b-440d-84ae-a52778899e79\") " pod="openstack-operators/openstack-operator-index-2mzkm" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.373587 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2mzkm" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.568017 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.608676 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mvnk9" Feb 19 19:36:24 crc kubenswrapper[4787]: I0219 19:36:24.821793 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2mzkm"] Feb 19 19:36:25 crc kubenswrapper[4787]: I0219 19:36:25.688768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2mzkm" event={"ID":"4bd7ec59-a31b-440d-84ae-a52778899e79","Type":"ContainerStarted","Data":"f97b3af3281cd89667938d92fb4d41270c693c18a72d5ea0a5cc65522305b7e1"} Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.012147 4787 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2mzkm"] Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.622253 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zrns9"] Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.623427 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zrns9" Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.630551 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zrns9"] Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.663158 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrmpf\" (UniqueName: \"kubernetes.io/projected/de424d05-0977-4dac-8bd9-01c37cf49d4e-kube-api-access-wrmpf\") pod \"openstack-operator-index-zrns9\" (UID: \"de424d05-0977-4dac-8bd9-01c37cf49d4e\") " pod="openstack-operators/openstack-operator-index-zrns9" Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.706830 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2mzkm" event={"ID":"4bd7ec59-a31b-440d-84ae-a52778899e79","Type":"ContainerStarted","Data":"2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd"} Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.706989 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2mzkm" podUID="4bd7ec59-a31b-440d-84ae-a52778899e79" containerName="registry-server" containerID="cri-o://2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd" gracePeriod=2 Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.723934 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2mzkm" 
podStartSLOduration=1.2980542800000001 podStartE2EDuration="3.723918677s" podCreationTimestamp="2026-02-19 19:36:24 +0000 UTC" firstStartedPulling="2026-02-19 19:36:24.818265557 +0000 UTC m=+1052.608931519" lastFinishedPulling="2026-02-19 19:36:27.244129984 +0000 UTC m=+1055.034795916" observedRunningTime="2026-02-19 19:36:27.720800438 +0000 UTC m=+1055.511466380" watchObservedRunningTime="2026-02-19 19:36:27.723918677 +0000 UTC m=+1055.514584619" Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.765240 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrmpf\" (UniqueName: \"kubernetes.io/projected/de424d05-0977-4dac-8bd9-01c37cf49d4e-kube-api-access-wrmpf\") pod \"openstack-operator-index-zrns9\" (UID: \"de424d05-0977-4dac-8bd9-01c37cf49d4e\") " pod="openstack-operators/openstack-operator-index-zrns9" Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.794756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrmpf\" (UniqueName: \"kubernetes.io/projected/de424d05-0977-4dac-8bd9-01c37cf49d4e-kube-api-access-wrmpf\") pod \"openstack-operator-index-zrns9\" (UID: \"de424d05-0977-4dac-8bd9-01c37cf49d4e\") " pod="openstack-operators/openstack-operator-index-zrns9" Feb 19 19:36:27 crc kubenswrapper[4787]: I0219 19:36:27.940952 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zrns9" Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.188765 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2mzkm"
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.373324 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tqz\" (UniqueName: \"kubernetes.io/projected/4bd7ec59-a31b-440d-84ae-a52778899e79-kube-api-access-p5tqz\") pod \"4bd7ec59-a31b-440d-84ae-a52778899e79\" (UID: \"4bd7ec59-a31b-440d-84ae-a52778899e79\") "
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.383327 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zrns9"]
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.384078 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd7ec59-a31b-440d-84ae-a52778899e79-kube-api-access-p5tqz" (OuterVolumeSpecName: "kube-api-access-p5tqz") pod "4bd7ec59-a31b-440d-84ae-a52778899e79" (UID: "4bd7ec59-a31b-440d-84ae-a52778899e79"). InnerVolumeSpecName "kube-api-access-p5tqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.475729 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5tqz\" (UniqueName: \"kubernetes.io/projected/4bd7ec59-a31b-440d-84ae-a52778899e79-kube-api-access-p5tqz\") on node \"crc\" DevicePath \"\""
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.716091 4787 generic.go:334] "Generic (PLEG): container finished" podID="4bd7ec59-a31b-440d-84ae-a52778899e79" containerID="2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd" exitCode=0
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.716178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2mzkm" event={"ID":"4bd7ec59-a31b-440d-84ae-a52778899e79","Type":"ContainerDied","Data":"2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd"}
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.716189 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2mzkm"
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.716677 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2mzkm" event={"ID":"4bd7ec59-a31b-440d-84ae-a52778899e79","Type":"ContainerDied","Data":"f97b3af3281cd89667938d92fb4d41270c693c18a72d5ea0a5cc65522305b7e1"}
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.716714 4787 scope.go:117] "RemoveContainer" containerID="2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd"
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.718377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zrns9" event={"ID":"de424d05-0977-4dac-8bd9-01c37cf49d4e","Type":"ContainerStarted","Data":"87b580a55802b4d6a23f23558ba03952ab539fba10223b2c78b1b01d2f0d5b57"}
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.718412 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zrns9" event={"ID":"de424d05-0977-4dac-8bd9-01c37cf49d4e","Type":"ContainerStarted","Data":"0c3cec0e6fde5484fa9efd5cbca9205ddbd6ddf5cc516ef578f205979aa89796"}
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.742805 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zrns9" podStartSLOduration=1.696557078 podStartE2EDuration="1.742784203s" podCreationTimestamp="2026-02-19 19:36:27 +0000 UTC" firstStartedPulling="2026-02-19 19:36:28.39241562 +0000 UTC m=+1056.183081562" lastFinishedPulling="2026-02-19 19:36:28.438642745 +0000 UTC m=+1056.229308687" observedRunningTime="2026-02-19 19:36:28.735964208 +0000 UTC m=+1056.526630150" watchObservedRunningTime="2026-02-19 19:36:28.742784203 +0000 UTC m=+1056.533450145"
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.745019 4787 scope.go:117] "RemoveContainer" containerID="2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd"
Feb 19 19:36:28 crc kubenswrapper[4787]: E0219 19:36:28.745656 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd\": container with ID starting with 2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd not found: ID does not exist" containerID="2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd"
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.745714 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd"} err="failed to get container status \"2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd\": rpc error: code = NotFound desc = could not find container \"2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd\": container with ID starting with 2cdb16e8ebb844a04d30f4faf64739909d3834e667a724c4b65b0101cf3a75dd not found: ID does not exist"
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.754946 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2mzkm"]
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.759964 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2mzkm"]
Feb 19 19:36:28 crc kubenswrapper[4787]: I0219 19:36:28.904385 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd7ec59-a31b-440d-84ae-a52778899e79" path="/var/lib/kubelet/pods/4bd7ec59-a31b-440d-84ae-a52778899e79/volumes"
Feb 19 19:36:29 crc kubenswrapper[4787]: I0219 19:36:29.562152 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5"
Feb 19 19:36:29 crc kubenswrapper[4787]: I0219 19:36:29.687002 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-9sqnr"
Feb 19 19:36:37 crc kubenswrapper[4787]: I0219 19:36:37.942418 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zrns9"
Feb 19 19:36:37 crc kubenswrapper[4787]: I0219 19:36:37.943897 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zrns9"
Feb 19 19:36:37 crc kubenswrapper[4787]: I0219 19:36:37.976517 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zrns9"
Feb 19 19:36:38 crc kubenswrapper[4787]: I0219 19:36:38.858825 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zrns9"
Feb 19 19:36:39 crc kubenswrapper[4787]: I0219 19:36:39.263255 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:36:39 crc kubenswrapper[4787]: I0219 19:36:39.263646 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:36:39 crc kubenswrapper[4787]: I0219 19:36:39.569994 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mvnk9"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.210348 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"]
Feb 19 19:36:45 crc kubenswrapper[4787]: E0219 19:36:45.211535 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd7ec59-a31b-440d-84ae-a52778899e79" containerName="registry-server"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.211559 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd7ec59-a31b-440d-84ae-a52778899e79" containerName="registry-server"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.211873 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd7ec59-a31b-440d-84ae-a52778899e79" containerName="registry-server"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.213841 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.215757 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wtlkb"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.221788 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"]
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.372907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-bundle\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.372973 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-util\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.373020 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4xh\" (UniqueName: \"kubernetes.io/projected/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-kube-api-access-lg4xh\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.474923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg4xh\" (UniqueName: \"kubernetes.io/projected/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-kube-api-access-lg4xh\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.475060 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-bundle\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.475106 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-util\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.475454 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-bundle\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.475471 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-util\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.499643 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg4xh\" (UniqueName: \"kubernetes.io/projected/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-kube-api-access-lg4xh\") pod \"7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") " pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.538264 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:45 crc kubenswrapper[4787]: I0219 19:36:45.985541 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"]
Feb 19 19:36:45 crc kubenswrapper[4787]: W0219 19:36:45.998916 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65fb8fa_29f2_4ef8_9d84_221f8e7c354a.slice/crio-c1b8c1a56ae5ddfdd7bd7a16875b57a0471b4874721837eaaf321c0789c5ffd0 WatchSource:0}: Error finding container c1b8c1a56ae5ddfdd7bd7a16875b57a0471b4874721837eaaf321c0789c5ffd0: Status 404 returned error can't find the container with id c1b8c1a56ae5ddfdd7bd7a16875b57a0471b4874721837eaaf321c0789c5ffd0
Feb 19 19:36:46 crc kubenswrapper[4787]: I0219 19:36:46.892296 4787 generic.go:334] "Generic (PLEG): container finished" podID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerID="0366a4758ffc24b2f23782d7e99ca9549cf7e07e01b2320cd656999b693ad762" exitCode=0
Feb 19 19:36:46 crc kubenswrapper[4787]: I0219 19:36:46.904344 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh" event={"ID":"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a","Type":"ContainerDied","Data":"0366a4758ffc24b2f23782d7e99ca9549cf7e07e01b2320cd656999b693ad762"}
Feb 19 19:36:46 crc kubenswrapper[4787]: I0219 19:36:46.904835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh" event={"ID":"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a","Type":"ContainerStarted","Data":"c1b8c1a56ae5ddfdd7bd7a16875b57a0471b4874721837eaaf321c0789c5ffd0"}
Feb 19 19:36:47 crc kubenswrapper[4787]: I0219 19:36:47.903435 4787 generic.go:334] "Generic (PLEG): container finished" podID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerID="5cdabc317fb8b50e4f154498a6be1cc1989cb5aa9b62380a4bc2e3a338c4831f" exitCode=0
Feb 19 19:36:47 crc kubenswrapper[4787]: I0219 19:36:47.903509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh" event={"ID":"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a","Type":"ContainerDied","Data":"5cdabc317fb8b50e4f154498a6be1cc1989cb5aa9b62380a4bc2e3a338c4831f"}
Feb 19 19:36:48 crc kubenswrapper[4787]: I0219 19:36:48.912270 4787 generic.go:334] "Generic (PLEG): container finished" podID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerID="024f1d13084b158f25c31ad3ba7816abea4782d07d8cfe9e496bb4f847209cc4" exitCode=0
Feb 19 19:36:48 crc kubenswrapper[4787]: I0219 19:36:48.912357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh" event={"ID":"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a","Type":"ContainerDied","Data":"024f1d13084b158f25c31ad3ba7816abea4782d07d8cfe9e496bb4f847209cc4"}
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.332808 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.458586 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-util\") pod \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") "
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.458838 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg4xh\" (UniqueName: \"kubernetes.io/projected/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-kube-api-access-lg4xh\") pod \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") "
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.458895 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-bundle\") pod \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\" (UID: \"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a\") "
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.459897 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-bundle" (OuterVolumeSpecName: "bundle") pod "f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" (UID: "f65fb8fa-29f2-4ef8-9d84-221f8e7c354a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.468416 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-kube-api-access-lg4xh" (OuterVolumeSpecName: "kube-api-access-lg4xh") pod "f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" (UID: "f65fb8fa-29f2-4ef8-9d84-221f8e7c354a"). InnerVolumeSpecName "kube-api-access-lg4xh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.492891 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-util" (OuterVolumeSpecName: "util") pod "f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" (UID: "f65fb8fa-29f2-4ef8-9d84-221f8e7c354a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.560566 4787 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-util\") on node \"crc\" DevicePath \"\""
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.560620 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg4xh\" (UniqueName: \"kubernetes.io/projected/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-kube-api-access-lg4xh\") on node \"crc\" DevicePath \"\""
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.560636 4787 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f65fb8fa-29f2-4ef8-9d84-221f8e7c354a-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.931148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh" event={"ID":"f65fb8fa-29f2-4ef8-9d84-221f8e7c354a","Type":"ContainerDied","Data":"c1b8c1a56ae5ddfdd7bd7a16875b57a0471b4874721837eaaf321c0789c5ffd0"}
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.931200 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b8c1a56ae5ddfdd7bd7a16875b57a0471b4874721837eaaf321c0789c5ffd0"
Feb 19 19:36:50 crc kubenswrapper[4787]: I0219 19:36:50.931230 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.148990 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"]
Feb 19 19:36:57 crc kubenswrapper[4787]: E0219 19:36:57.149973 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="util"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.149990 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="util"
Feb 19 19:36:57 crc kubenswrapper[4787]: E0219 19:36:57.150016 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="extract"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.150024 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="extract"
Feb 19 19:36:57 crc kubenswrapper[4787]: E0219 19:36:57.150058 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="pull"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.150066 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="pull"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.150266 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65fb8fa-29f2-4ef8-9d84-221f8e7c354a" containerName="extract"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.151053 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.153353 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5j64c"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.187620 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"]
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.271779 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgtjg\" (UniqueName: \"kubernetes.io/projected/5273ac77-af0e-4a20-aa52-708ac057cfdc-kube-api-access-tgtjg\") pod \"openstack-operator-controller-init-b5dd774c6-bjggj\" (UID: \"5273ac77-af0e-4a20-aa52-708ac057cfdc\") " pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.373445 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgtjg\" (UniqueName: \"kubernetes.io/projected/5273ac77-af0e-4a20-aa52-708ac057cfdc-kube-api-access-tgtjg\") pod \"openstack-operator-controller-init-b5dd774c6-bjggj\" (UID: \"5273ac77-af0e-4a20-aa52-708ac057cfdc\") " pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.406061 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgtjg\" (UniqueName: \"kubernetes.io/projected/5273ac77-af0e-4a20-aa52-708ac057cfdc-kube-api-access-tgtjg\") pod \"openstack-operator-controller-init-b5dd774c6-bjggj\" (UID: \"5273ac77-af0e-4a20-aa52-708ac057cfdc\") " pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:36:57 crc kubenswrapper[4787]: I0219 19:36:57.470479 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:36:58 crc kubenswrapper[4787]: I0219 19:36:58.022520 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"]
Feb 19 19:36:58 crc kubenswrapper[4787]: I0219 19:36:58.997504 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj" event={"ID":"5273ac77-af0e-4a20-aa52-708ac057cfdc","Type":"ContainerStarted","Data":"36376b437d3355f46422e4b44ad2f24eaf76dc253cf58b9b8ea9648bb19fdfc4"}
Feb 19 19:37:02 crc kubenswrapper[4787]: I0219 19:37:02.024385 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj" event={"ID":"5273ac77-af0e-4a20-aa52-708ac057cfdc","Type":"ContainerStarted","Data":"31f3aefe06f3a6ba98952a50cf6257ed69310f562d593be9b1827572411d446a"}
Feb 19 19:37:02 crc kubenswrapper[4787]: I0219 19:37:02.024931 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:37:02 crc kubenswrapper[4787]: I0219 19:37:02.055116 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj" podStartSLOduration=1.5363605420000002 podStartE2EDuration="5.055099347s" podCreationTimestamp="2026-02-19 19:36:57 +0000 UTC" firstStartedPulling="2026-02-19 19:36:58.03218723 +0000 UTC m=+1085.822853172" lastFinishedPulling="2026-02-19 19:37:01.550926035 +0000 UTC m=+1089.341591977" observedRunningTime="2026-02-19 19:37:02.051144943 +0000 UTC m=+1089.841810925" watchObservedRunningTime="2026-02-19 19:37:02.055099347 +0000 UTC m=+1089.845765289"
Feb 19 19:37:07 crc kubenswrapper[4787]: I0219 19:37:07.473968 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj"
Feb 19 19:37:09 crc kubenswrapper[4787]: I0219 19:37:09.263579 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:37:09 crc kubenswrapper[4787]: I0219 19:37:09.263667 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:37:39 crc kubenswrapper[4787]: I0219 19:37:39.263378 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:37:39 crc kubenswrapper[4787]: I0219 19:37:39.263980 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:37:39 crc kubenswrapper[4787]: I0219 19:37:39.264038 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq"
Feb 19 19:37:39 crc kubenswrapper[4787]: I0219 19:37:39.264862 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4705fa8f568a3ef2f81b22b29b62816c4b2d13e8cd966ddadc7147e1265dbf66"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 19:37:39 crc kubenswrapper[4787]: I0219 19:37:39.264951 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://4705fa8f568a3ef2f81b22b29b62816c4b2d13e8cd966ddadc7147e1265dbf66" gracePeriod=600
Feb 19 19:37:40 crc kubenswrapper[4787]: I0219 19:37:40.317138 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="4705fa8f568a3ef2f81b22b29b62816c4b2d13e8cd966ddadc7147e1265dbf66" exitCode=0
Feb 19 19:37:40 crc kubenswrapper[4787]: I0219 19:37:40.317768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"4705fa8f568a3ef2f81b22b29b62816c4b2d13e8cd966ddadc7147e1265dbf66"}
Feb 19 19:37:40 crc kubenswrapper[4787]: I0219 19:37:40.317804 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"4c290f7666b81201ba0242964eb17cef06ef0c6b6b9b4a97e80ee9c3f5daac23"}
Feb 19 19:37:40 crc kubenswrapper[4787]: I0219 19:37:40.317821 4787 scope.go:117] "RemoveContainer" containerID="908353e6e26c8eb14aa15cfd6585d127a5cf2fd790c45d696549088ebf5dab4a"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.636422 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.638913 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.660407 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jfkk9"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.673115 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.682344 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.684436 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.699034 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ztvzz"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.715580 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.716699 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.719258 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2skrz"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.743784 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.764742 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.765983 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.768049 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-w4rzv"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.782002 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnwpj\" (UniqueName: \"kubernetes.io/projected/b7deddaa-9e2a-4e95-8dce-fb6b70a0523e-kube-api-access-fnwpj\") pod \"barbican-operator-controller-manager-868647ff47-hz9f6\" (UID: \"b7deddaa-9e2a-4e95-8dce-fb6b70a0523e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.782038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvtk\" (UniqueName: \"kubernetes.io/projected/0fdbbc7b-81f4-401b-8df0-59417ab3ec18-kube-api-access-gqvtk\") pod \"cinder-operator-controller-manager-5d946d989d-zvvkw\" (UID: \"0fdbbc7b-81f4-401b-8df0-59417ab3ec18\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.782088 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrfb\" (UniqueName: \"kubernetes.io/projected/39e4daf9-e2ed-4325-9f5f-27b2b5662945-kube-api-access-fsrfb\") pod \"designate-operator-controller-manager-6d8bf5c495-hdncj\" (UID: \"39e4daf9-e2ed-4325-9f5f-27b2b5662945\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.791336 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.809563 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.848092 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-w557k"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.849283 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.854941 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2pz2d"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.870748 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-w557k"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.879908 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.881163 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.884532 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnwpj\" (UniqueName: \"kubernetes.io/projected/b7deddaa-9e2a-4e95-8dce-fb6b70a0523e-kube-api-access-fnwpj\") pod \"barbican-operator-controller-manager-868647ff47-hz9f6\" (UID: \"b7deddaa-9e2a-4e95-8dce-fb6b70a0523e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.884586 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvtk\" (UniqueName: \"kubernetes.io/projected/0fdbbc7b-81f4-401b-8df0-59417ab3ec18-kube-api-access-gqvtk\") pod \"cinder-operator-controller-manager-5d946d989d-zvvkw\" (UID: \"0fdbbc7b-81f4-401b-8df0-59417ab3ec18\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.884657 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsrfb\" (UniqueName: \"kubernetes.io/projected/39e4daf9-e2ed-4325-9f5f-27b2b5662945-kube-api-access-fsrfb\") pod \"designate-operator-controller-manager-6d8bf5c495-hdncj\" (UID: \"39e4daf9-e2ed-4325-9f5f-27b2b5662945\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.884687 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhlvq\" (UniqueName: \"kubernetes.io/projected/f58c3336-8153-4c54-95c6-2cf2f23cbe57-kube-api-access-hhlvq\") pod \"glance-operator-controller-manager-77987464f4-tlx7r\" (UID: \"f58c3336-8153-4c54-95c6-2cf2f23cbe57\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.888903 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4bszw"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.915519 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl"]
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.918448 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrfb\" (UniqueName: \"kubernetes.io/projected/39e4daf9-e2ed-4325-9f5f-27b2b5662945-kube-api-access-fsrfb\") pod \"designate-operator-controller-manager-6d8bf5c495-hdncj\" (UID: \"39e4daf9-e2ed-4325-9f5f-27b2b5662945\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"
Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.918505 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvtk\" (UniqueName: \"kubernetes.io/projected/0fdbbc7b-81f4-401b-8df0-59417ab3ec18-kube-api-access-gqvtk\") pod \"cinder-operator-controller-manager-5d946d989d-zvvkw\" (UID:
\"0fdbbc7b-81f4-401b-8df0-59417ab3ec18\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.918785 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnwpj\" (UniqueName: \"kubernetes.io/projected/b7deddaa-9e2a-4e95-8dce-fb6b70a0523e-kube-api-access-fnwpj\") pod \"barbican-operator-controller-manager-868647ff47-hz9f6\" (UID: \"b7deddaa-9e2a-4e95-8dce-fb6b70a0523e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.924222 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-j59h4"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.925327 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.929431 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.930278 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-26p45" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.935305 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.936424 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.941411 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ns6xb" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.947694 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.948791 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.951310 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-xtxjd" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.957933 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.959178 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.963626 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p2hf8" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.963870 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-j59h4"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.970898 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.976676 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.984331 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.985623 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.987641 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-czjg9" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.990337 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6"] Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.992157 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhlvq\" (UniqueName: \"kubernetes.io/projected/f58c3336-8153-4c54-95c6-2cf2f23cbe57-kube-api-access-hhlvq\") pod \"glance-operator-controller-manager-77987464f4-tlx7r\" (UID: \"f58c3336-8153-4c54-95c6-2cf2f23cbe57\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.992220 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzcht\" (UniqueName: \"kubernetes.io/projected/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-kube-api-access-lzcht\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.992261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhgd\" (UniqueName: \"kubernetes.io/projected/4a47b4c3-d7f4-4194-bd9c-fdef06d3450d-kube-api-access-5hhgd\") pod \"keystone-operator-controller-manager-b4d948c87-66hj2\" (UID: \"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.992315 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.992362 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dglv9\" (UniqueName: \"kubernetes.io/projected/68b08cc9-812d-4199-8654-9a5a3f2a855f-kube-api-access-dglv9\") pod \"horizon-operator-controller-manager-5b9b8895d5-5k7fl\" (UID: \"68b08cc9-812d-4199-8654-9a5a3f2a855f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.992533 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zpt\" (UniqueName: \"kubernetes.io/projected/9cdc475f-0036-4e63-8fd4-c1e44537668d-kube-api-access-x9zpt\") pod \"heat-operator-controller-manager-69f49c598c-w557k\" (UID: \"9cdc475f-0036-4e63-8fd4-c1e44537668d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" Feb 19 19:37:47 crc kubenswrapper[4787]: I0219 19:37:47.998049 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.000475 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.010966 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.016918 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.018010 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.022197 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dxxkk" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.025165 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.026295 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhlvq\" (UniqueName: \"kubernetes.io/projected/f58c3336-8153-4c54-95c6-2cf2f23cbe57-kube-api-access-hhlvq\") pod \"glance-operator-controller-manager-77987464f4-tlx7r\" (UID: \"f58c3336-8153-4c54-95c6-2cf2f23cbe57\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.029700 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.033991 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ch2tg" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.055503 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.056016 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.055922 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.068438 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.069515 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.071251 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-q2xlb" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.076309 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.085234 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.085562 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.086486 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.090194 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-w25xp" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.091412 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.092761 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.094792 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzcht\" (UniqueName: \"kubernetes.io/projected/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-kube-api-access-lzcht\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.094843 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgks\" (UniqueName: \"kubernetes.io/projected/a752c75e-1e1e-4d78-b82a-95f8df84523f-kube-api-access-sbgks\") pod \"mariadb-operator-controller-manager-6994f66f48-bbnhv\" (UID: \"a752c75e-1e1e-4d78-b82a-95f8df84523f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.094865 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hhgd\" (UniqueName: \"kubernetes.io/projected/4a47b4c3-d7f4-4194-bd9c-fdef06d3450d-kube-api-access-5hhgd\") pod \"keystone-operator-controller-manager-b4d948c87-66hj2\" (UID: \"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.094895 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.094916 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2mr\" (UniqueName: \"kubernetes.io/projected/46df12dd-6fd4-4508-8141-ef1cc6551d79-kube-api-access-tg2mr\") pod \"manila-operator-controller-manager-54f6768c69-77xc6\" (UID: \"46df12dd-6fd4-4508-8141-ef1cc6551d79\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.094941 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dglv9\" (UniqueName: \"kubernetes.io/projected/68b08cc9-812d-4199-8654-9a5a3f2a855f-kube-api-access-dglv9\") pod \"horizon-operator-controller-manager-5b9b8895d5-5k7fl\" (UID: \"68b08cc9-812d-4199-8654-9a5a3f2a855f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.095012 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twqqx\" (UniqueName: \"kubernetes.io/projected/285a6f28-aeac-4b0d-816a-2eb05abe7ef3-kube-api-access-twqqx\") pod \"nova-operator-controller-manager-567668f5cf-whv4k\" (UID: \"285a6f28-aeac-4b0d-816a-2eb05abe7ef3\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.095041 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zpt\" (UniqueName: \"kubernetes.io/projected/9cdc475f-0036-4e63-8fd4-c1e44537668d-kube-api-access-x9zpt\") pod \"heat-operator-controller-manager-69f49c598c-w557k\" (UID: \"9cdc475f-0036-4e63-8fd4-c1e44537668d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.095089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9tqd\" 
(UniqueName: \"kubernetes.io/projected/00ef1a7b-bf28-4126-b60f-c79af3fde4da-kube-api-access-d9tqd\") pod \"neutron-operator-controller-manager-64ddbf8bb-9p6x4\" (UID: \"00ef1a7b-bf28-4126-b60f-c79af3fde4da\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.095109 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqs65\" (UniqueName: \"kubernetes.io/projected/6e92b566-c5a6-40e8-be75-5de416385888-kube-api-access-zqs65\") pod \"ironic-operator-controller-manager-554564d7fc-5g7hg\" (UID: \"6e92b566-c5a6-40e8-be75-5de416385888\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.095621 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.095669 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert podName:06aa3b20-a2ee-4c2b-bda6-0e876910a26c nodeName:}" failed. No retries permitted until 2026-02-19 19:37:48.595650225 +0000 UTC m=+1136.386316177 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert") pod "infra-operator-controller-manager-79d975b745-j59h4" (UID: "06aa3b20-a2ee-4c2b-bda6-0e876910a26c") : secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.100850 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.101774 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-89gf7" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.102950 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.143702 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.157105 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5f9zd" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.162015 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hhgd\" (UniqueName: \"kubernetes.io/projected/4a47b4c3-d7f4-4194-bd9c-fdef06d3450d-kube-api-access-5hhgd\") pod \"keystone-operator-controller-manager-b4d948c87-66hj2\" (UID: \"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.162741 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dglv9\" (UniqueName: \"kubernetes.io/projected/68b08cc9-812d-4199-8654-9a5a3f2a855f-kube-api-access-dglv9\") pod \"horizon-operator-controller-manager-5b9b8895d5-5k7fl\" (UID: \"68b08cc9-812d-4199-8654-9a5a3f2a855f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.193939 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zpt\" (UniqueName: \"kubernetes.io/projected/9cdc475f-0036-4e63-8fd4-c1e44537668d-kube-api-access-x9zpt\") pod \"heat-operator-controller-manager-69f49c598c-w557k\" (UID: \"9cdc475f-0036-4e63-8fd4-c1e44537668d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.199800 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzcht\" (UniqueName: \"kubernetes.io/projected/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-kube-api-access-lzcht\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: 
\"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.203999 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/26a6b075-ab07-4508-86f7-2af4934e078a-kube-api-access-9vkc6\") pod \"octavia-operator-controller-manager-69f8888797-88xzz\" (UID: \"26a6b075-ab07-4508-86f7-2af4934e078a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.204093 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twqqx\" (UniqueName: \"kubernetes.io/projected/285a6f28-aeac-4b0d-816a-2eb05abe7ef3-kube-api-access-twqqx\") pod \"nova-operator-controller-manager-567668f5cf-whv4k\" (UID: \"285a6f28-aeac-4b0d-816a-2eb05abe7ef3\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.204219 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv87f\" (UniqueName: \"kubernetes.io/projected/c0ee76ae-6d9e-4470-8f77-27d7d231bb7d-kube-api-access-jv87f\") pod \"placement-operator-controller-manager-8497b45c89-rv4sk\" (UID: \"c0ee76ae-6d9e-4470-8f77-27d7d231bb7d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.204935 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s4tf\" (UniqueName: \"kubernetes.io/projected/880dd943-ce91-4373-ab8a-fd5df0a44e2a-kube-api-access-2s4tf\") pod \"ovn-operator-controller-manager-d44cf6b75-j2ktf\" (UID: \"880dd943-ce91-4373-ab8a-fd5df0a44e2a\") " 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.204971 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9tqd\" (UniqueName: \"kubernetes.io/projected/00ef1a7b-bf28-4126-b60f-c79af3fde4da-kube-api-access-d9tqd\") pod \"neutron-operator-controller-manager-64ddbf8bb-9p6x4\" (UID: \"00ef1a7b-bf28-4126-b60f-c79af3fde4da\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.204993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqs65\" (UniqueName: \"kubernetes.io/projected/6e92b566-c5a6-40e8-be75-5de416385888-kube-api-access-zqs65\") pod \"ironic-operator-controller-manager-554564d7fc-5g7hg\" (UID: \"6e92b566-c5a6-40e8-be75-5de416385888\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.205060 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbgks\" (UniqueName: \"kubernetes.io/projected/a752c75e-1e1e-4d78-b82a-95f8df84523f-kube-api-access-sbgks\") pod \"mariadb-operator-controller-manager-6994f66f48-bbnhv\" (UID: \"a752c75e-1e1e-4d78-b82a-95f8df84523f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.205187 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2mr\" (UniqueName: \"kubernetes.io/projected/46df12dd-6fd4-4508-8141-ef1cc6551d79-kube-api-access-tg2mr\") pod \"manila-operator-controller-manager-54f6768c69-77xc6\" (UID: \"46df12dd-6fd4-4508-8141-ef1cc6551d79\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.205238 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.205256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djb6z\" (UniqueName: \"kubernetes.io/projected/78a45da3-619d-4cc4-a819-6dad66a61737-kube-api-access-djb6z\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.208903 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.221683 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.240539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9tqd\" (UniqueName: \"kubernetes.io/projected/00ef1a7b-bf28-4126-b60f-c79af3fde4da-kube-api-access-d9tqd\") pod \"neutron-operator-controller-manager-64ddbf8bb-9p6x4\" (UID: \"00ef1a7b-bf28-4126-b60f-c79af3fde4da\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.242725 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg2mr\" (UniqueName: \"kubernetes.io/projected/46df12dd-6fd4-4508-8141-ef1cc6551d79-kube-api-access-tg2mr\") pod \"manila-operator-controller-manager-54f6768c69-77xc6\" (UID: \"46df12dd-6fd4-4508-8141-ef1cc6551d79\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.244139 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twqqx\" (UniqueName: \"kubernetes.io/projected/285a6f28-aeac-4b0d-816a-2eb05abe7ef3-kube-api-access-twqqx\") pod \"nova-operator-controller-manager-567668f5cf-whv4k\" (UID: \"285a6f28-aeac-4b0d-816a-2eb05abe7ef3\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.244159 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbgks\" (UniqueName: \"kubernetes.io/projected/a752c75e-1e1e-4d78-b82a-95f8df84523f-kube-api-access-sbgks\") pod \"mariadb-operator-controller-manager-6994f66f48-bbnhv\" (UID: 
\"a752c75e-1e1e-4d78-b82a-95f8df84523f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.257872 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqs65\" (UniqueName: \"kubernetes.io/projected/6e92b566-c5a6-40e8-be75-5de416385888-kube-api-access-zqs65\") pod \"ironic-operator-controller-manager-554564d7fc-5g7hg\" (UID: \"6e92b566-c5a6-40e8-be75-5de416385888\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.268208 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.268412 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.276329 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.284218 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.296155 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q46b7"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.297166 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.303426 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-d6rln" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.310221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.310286 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djb6z\" (UniqueName: \"kubernetes.io/projected/78a45da3-619d-4cc4-a819-6dad66a61737-kube-api-access-djb6z\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.310337 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/26a6b075-ab07-4508-86f7-2af4934e078a-kube-api-access-9vkc6\") pod \"octavia-operator-controller-manager-69f8888797-88xzz\" (UID: \"26a6b075-ab07-4508-86f7-2af4934e078a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.310425 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv87f\" (UniqueName: \"kubernetes.io/projected/c0ee76ae-6d9e-4470-8f77-27d7d231bb7d-kube-api-access-jv87f\") pod 
\"placement-operator-controller-manager-8497b45c89-rv4sk\" (UID: \"c0ee76ae-6d9e-4470-8f77-27d7d231bb7d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.310442 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s4tf\" (UniqueName: \"kubernetes.io/projected/880dd943-ce91-4373-ab8a-fd5df0a44e2a-kube-api-access-2s4tf\") pod \"ovn-operator-controller-manager-d44cf6b75-j2ktf\" (UID: \"880dd943-ce91-4373-ab8a-fd5df0a44e2a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.310864 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.310910 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert podName:78a45da3-619d-4cc4-a819-6dad66a61737 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:48.810895411 +0000 UTC m=+1136.601561353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" (UID: "78a45da3-619d-4cc4-a819-6dad66a61737") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.326963 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.328385 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q46b7"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.331795 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv87f\" (UniqueName: \"kubernetes.io/projected/c0ee76ae-6d9e-4470-8f77-27d7d231bb7d-kube-api-access-jv87f\") pod \"placement-operator-controller-manager-8497b45c89-rv4sk\" (UID: \"c0ee76ae-6d9e-4470-8f77-27d7d231bb7d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.334516 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/26a6b075-ab07-4508-86f7-2af4934e078a-kube-api-access-9vkc6\") pod \"octavia-operator-controller-manager-69f8888797-88xzz\" (UID: \"26a6b075-ab07-4508-86f7-2af4934e078a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.335301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djb6z\" (UniqueName: \"kubernetes.io/projected/78a45da3-619d-4cc4-a819-6dad66a61737-kube-api-access-djb6z\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.340317 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s4tf\" (UniqueName: \"kubernetes.io/projected/880dd943-ce91-4373-ab8a-fd5df0a44e2a-kube-api-access-2s4tf\") pod 
\"ovn-operator-controller-manager-d44cf6b75-j2ktf\" (UID: \"880dd943-ce91-4373-ab8a-fd5df0a44e2a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.376753 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.382836 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.386826 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hhcrm" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.402282 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.412754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzp4\" (UniqueName: \"kubernetes.io/projected/0944c0f9-ef54-46cc-be37-a59477312705-kube-api-access-rfzp4\") pod \"swift-operator-controller-manager-68f46476f-q46b7\" (UID: \"0944c0f9-ef54-46cc-be37-a59477312705\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.437369 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.473864 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-x8ltf"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.476148 4787 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.482829 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-2fskf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.488575 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.494320 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.504042 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-x8ltf"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.514979 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrmqz\" (UniqueName: \"kubernetes.io/projected/90a8e9a7-b3db-4b64-bde8-569c3e8485d5-kube-api-access-zrmqz\") pod \"telemetry-operator-controller-manager-58b878c868-9zgl9\" (UID: \"90a8e9a7-b3db-4b64-bde8-569c3e8485d5\") " pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.515348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzp4\" (UniqueName: \"kubernetes.io/projected/0944c0f9-ef54-46cc-be37-a59477312705-kube-api-access-rfzp4\") pod \"swift-operator-controller-manager-68f46476f-q46b7\" (UID: \"0944c0f9-ef54-46cc-be37-a59477312705\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.517638 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.518982 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.521179 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wgqpr" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.522682 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.538023 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzp4\" (UniqueName: \"kubernetes.io/projected/0944c0f9-ef54-46cc-be37-a59477312705-kube-api-access-rfzp4\") pod \"swift-operator-controller-manager-68f46476f-q46b7\" (UID: \"0944c0f9-ef54-46cc-be37-a59477312705\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.542543 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.545732 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.596102 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.601929 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.603108 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.605298 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.605927 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.607285 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xrfzn" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.613336 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.616553 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.616677 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.616800 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.616864 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert podName:06aa3b20-a2ee-4c2b-bda6-0e876910a26c nodeName:}" failed. No retries permitted until 2026-02-19 19:37:49.616849749 +0000 UTC m=+1137.407515691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert") pod "infra-operator-controller-manager-79d975b745-j59h4" (UID: "06aa3b20-a2ee-4c2b-bda6-0e876910a26c") : secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.617222 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrmqz\" (UniqueName: \"kubernetes.io/projected/90a8e9a7-b3db-4b64-bde8-569c3e8485d5-kube-api-access-zrmqz\") pod \"telemetry-operator-controller-manager-58b878c868-9zgl9\" (UID: \"90a8e9a7-b3db-4b64-bde8-569c3e8485d5\") " pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.617265 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq2lz\" (UniqueName: \"kubernetes.io/projected/bfb600b1-766e-4df0-9f20-a5b4ad0ed684-kube-api-access-fq2lz\") pod \"watcher-operator-controller-manager-5db88f68c-s78xt\" (UID: \"bfb600b1-766e-4df0-9f20-a5b4ad0ed684\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.617299 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvswk\" (UniqueName: \"kubernetes.io/projected/c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4-kube-api-access-gvswk\") pod \"test-operator-controller-manager-7866795846-x8ltf\" (UID: \"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4\") " pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.636125 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrmqz\" (UniqueName: \"kubernetes.io/projected/90a8e9a7-b3db-4b64-bde8-569c3e8485d5-kube-api-access-zrmqz\") pod 
\"telemetry-operator-controller-manager-58b878c868-9zgl9\" (UID: \"90a8e9a7-b3db-4b64-bde8-569c3e8485d5\") " pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.641166 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.643054 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.692926 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.694195 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-2dnw9" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.722322 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswdm\" (UniqueName: \"kubernetes.io/projected/36c1d908-879d-4d98-bd71-06b5c6e802e8-kube-api-access-fswdm\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.722877 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: 
I0219 19:37:48.723019 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq2lz\" (UniqueName: \"kubernetes.io/projected/bfb600b1-766e-4df0-9f20-a5b4ad0ed684-kube-api-access-fq2lz\") pod \"watcher-operator-controller-manager-5db88f68c-s78xt\" (UID: \"bfb600b1-766e-4df0-9f20-a5b4ad0ed684\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.723075 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvswk\" (UniqueName: \"kubernetes.io/projected/c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4-kube-api-access-gvswk\") pod \"test-operator-controller-manager-7866795846-x8ltf\" (UID: \"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4\") " pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.723151 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.725342 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.742871 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.745748 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq2lz\" (UniqueName: \"kubernetes.io/projected/bfb600b1-766e-4df0-9f20-a5b4ad0ed684-kube-api-access-fq2lz\") pod \"watcher-operator-controller-manager-5db88f68c-s78xt\" (UID: \"bfb600b1-766e-4df0-9f20-a5b4ad0ed684\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.755301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvswk\" (UniqueName: \"kubernetes.io/projected/c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4-kube-api-access-gvswk\") pod \"test-operator-controller-manager-7866795846-x8ltf\" (UID: \"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4\") " pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.802732 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.824850 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.824903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.824933 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wn2\" (UniqueName: \"kubernetes.io/projected/250d2041-efa0-41fb-8b0e-e99ba8c1c14c-kube-api-access-m5wn2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phlkl\" (UID: \"250d2041-efa0-41fb-8b0e-e99ba8c1c14c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.825009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.825051 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswdm\" (UniqueName: \"kubernetes.io/projected/36c1d908-879d-4d98-bd71-06b5c6e802e8-kube-api-access-fswdm\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.825468 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.825511 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert podName:78a45da3-619d-4cc4-a819-6dad66a61737 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:49.825496635 +0000 UTC m=+1137.616162577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" (UID: "78a45da3-619d-4cc4-a819-6dad66a61737") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.825875 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.825901 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:49.325892567 +0000 UTC m=+1137.116558509 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "metrics-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.825987 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: E0219 19:37:48.826010 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:49.32600337 +0000 UTC m=+1137.116669312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "webhook-server-cert" not found Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.862033 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.874409 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswdm\" (UniqueName: \"kubernetes.io/projected/36c1d908-879d-4d98-bd71-06b5c6e802e8-kube-api-access-fswdm\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.916878 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.925160 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.926295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5wn2\" (UniqueName: \"kubernetes.io/projected/250d2041-efa0-41fb-8b0e-e99ba8c1c14c-kube-api-access-m5wn2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phlkl\" (UID: \"250d2041-efa0-41fb-8b0e-e99ba8c1c14c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.933389 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"] Feb 19 19:37:48 crc kubenswrapper[4787]: I0219 19:37:48.964025 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5wn2\" (UniqueName: \"kubernetes.io/projected/250d2041-efa0-41fb-8b0e-e99ba8c1c14c-kube-api-access-m5wn2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-phlkl\" (UID: \"250d2041-efa0-41fb-8b0e-e99ba8c1c14c\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.033052 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.287416 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.298958 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2"] Feb 19 19:37:49 crc kubenswrapper[4787]: W0219 19:37:49.307370 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a47b4c3_d7f4_4194_bd9c_fdef06d3450d.slice/crio-a565b3a5f811f8edcf8155e4be2005de2afefbaff8a80c38cf6fceb15909d0a1 WatchSource:0}: Error finding container a565b3a5f811f8edcf8155e4be2005de2afefbaff8a80c38cf6fceb15909d0a1: Status 404 returned error can't find the container with id a565b3a5f811f8edcf8155e4be2005de2afefbaff8a80c38cf6fceb15909d0a1 Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.334450 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.334568 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: 
\"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.334673 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.334726 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:50.334711064 +0000 UTC m=+1138.125377016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "metrics-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.335042 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.335098 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:50.335088135 +0000 UTC m=+1138.125754087 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "webhook-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.423327 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" event={"ID":"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d","Type":"ContainerStarted","Data":"a565b3a5f811f8edcf8155e4be2005de2afefbaff8a80c38cf6fceb15909d0a1"} Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.424488 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" event={"ID":"b7deddaa-9e2a-4e95-8dce-fb6b70a0523e","Type":"ContainerStarted","Data":"a4efaf85f573a3af7fe524198ff91d83c19957eac6fec4a70b12de68c406bfcc"} Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.425490 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" event={"ID":"39e4daf9-e2ed-4325-9f5f-27b2b5662945","Type":"ContainerStarted","Data":"111bc87328083911312a5945c4d13d6ee592788b32737459c68e2bbe6d85b5c2"} Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.426528 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" event={"ID":"c0ee76ae-6d9e-4470-8f77-27d7d231bb7d","Type":"ContainerStarted","Data":"8e6cfbebb53aca31e9fe820fdafed678ad8d5a18e944721cdef87992deb404fe"} Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.427491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" 
event={"ID":"0fdbbc7b-81f4-401b-8df0-59417ab3ec18","Type":"ContainerStarted","Data":"ca5a8d96731e0f2d0e10b3f7cb478ee2eb474e2d3cad5b062e61859e16b094ab"} Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.490763 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r"] Feb 19 19:37:49 crc kubenswrapper[4787]: W0219 19:37:49.494530 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf58c3336_8153_4c54_95c6_2cf2f23cbe57.slice/crio-925f2a9b2e0ff0e67dfe50adb75f1e443db778b4ba646da870a28a120b53f422 WatchSource:0}: Error finding container 925f2a9b2e0ff0e67dfe50adb75f1e443db778b4ba646da870a28a120b53f422: Status 404 returned error can't find the container with id 925f2a9b2e0ff0e67dfe50adb75f1e443db778b4ba646da870a28a120b53f422 Feb 19 19:37:49 crc kubenswrapper[4787]: W0219 19:37:49.498991 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b08cc9_812d_4199_8654_9a5a3f2a855f.slice/crio-b295efb58c72c972ad095fdc12e749feb982a1d5233005959d82d1b4bdd97efa WatchSource:0}: Error finding container b295efb58c72c972ad095fdc12e749feb982a1d5233005959d82d1b4bdd97efa: Status 404 returned error can't find the container with id b295efb58c72c972ad095fdc12e749feb982a1d5233005959d82d1b4bdd97efa Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.499022 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.640816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.641072 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.641149 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert podName:06aa3b20-a2ee-4c2b-bda6-0e876910a26c nodeName:}" failed. No retries permitted until 2026-02-19 19:37:51.641134195 +0000 UTC m=+1139.431800127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert") pod "infra-operator-controller-manager-79d975b745-j59h4" (UID: "06aa3b20-a2ee-4c2b-bda6-0e876910a26c") : secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.704115 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.718072 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.727658 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.738026 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.747858 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-w557k"] Feb 19 19:37:49 crc kubenswrapper[4787]: I0219 19:37:49.843805 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.844031 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:49 crc kubenswrapper[4787]: E0219 19:37:49.844132 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert podName:78a45da3-619d-4cc4-a819-6dad66a61737 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:51.844113218 +0000 UTC m=+1139.634779160 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" (UID: "78a45da3-619d-4cc4-a819-6dad66a61737") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.317029 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf"] Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.355226 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.355328 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.355494 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.355538 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:52.35552471 +0000 UTC m=+1140.146190652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "webhook-server-cert" not found Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.355881 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.355907 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:52.355899171 +0000 UTC m=+1140.146565103 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "metrics-server-cert" not found Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.355928 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl"] Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.368652 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt"] Feb 19 19:37:50 crc kubenswrapper[4787]: W0219 19:37:50.370564 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250d2041_efa0_41fb_8b0e_e99ba8c1c14c.slice/crio-0c08912c5aae7d98013645d5170a9c5f37f164b66d6c03360d5ba5a742d67999 WatchSource:0}: Error finding container 0c08912c5aae7d98013645d5170a9c5f37f164b66d6c03360d5ba5a742d67999: Status 404 returned error can't find the container with id 0c08912c5aae7d98013645d5170a9c5f37f164b66d6c03360d5ba5a742d67999 Feb 19 19:37:50 crc kubenswrapper[4787]: W0219 19:37:50.373775 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod880dd943_ce91_4373_ab8a_fd5df0a44e2a.slice/crio-504ef90592070e1519a20b3591ea5dcb2609a28ba60cbaf8c7e6b3f46293d91d WatchSource:0}: Error finding container 504ef90592070e1519a20b3591ea5dcb2609a28ba60cbaf8c7e6b3f46293d91d: Status 404 returned error can't find the container with id 504ef90592070e1519a20b3591ea5dcb2609a28ba60cbaf8c7e6b3f46293d91d Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.379446 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9"] Feb 19 19:37:50 crc 
kubenswrapper[4787]: I0219 19:37:50.388941 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-x8ltf"] Feb 19 19:37:50 crc kubenswrapper[4787]: W0219 19:37:50.404823 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e6fa36_c4ad_47d6_9ccb_9fc66b1038a4.slice/crio-b82762330dc7aaac1d6ca12131e483fbb29a369ee138a25b5f1c606f1c741903 WatchSource:0}: Error finding container b82762330dc7aaac1d6ca12131e483fbb29a369ee138a25b5f1c606f1c741903: Status 404 returned error can't find the container with id b82762330dc7aaac1d6ca12131e483fbb29a369ee138a25b5f1c606f1c741903 Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.405703 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6"] Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.415140 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q46b7"] Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.423707 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz"] Feb 19 19:37:50 crc kubenswrapper[4787]: W0219 19:37:50.487628 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb600b1_766e_4df0_9f20_a5b4ad0ed684.slice/crio-37a3d370b7c39e8bbdb5c6e4fce890b028e30edec8365a58f83b841224bbf9a4 WatchSource:0}: Error finding container 37a3d370b7c39e8bbdb5c6e4fce890b028e30edec8365a58f83b841224bbf9a4: Status 404 returned error can't find the container with id 37a3d370b7c39e8bbdb5c6e4fce890b028e30edec8365a58f83b841224bbf9a4 Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.502102 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" event={"ID":"a752c75e-1e1e-4d78-b82a-95f8df84523f","Type":"ContainerStarted","Data":"0874fcf2b20e22834bc715326ce22e53a0890b8c4a9afef680e785f18588a7d5"} Feb 19 19:37:50 crc kubenswrapper[4787]: W0219 19:37:50.504662 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46df12dd_6fd4_4508_8141_ef1cc6551d79.slice/crio-eecb132fd5b19257fdd6a905b61e205f7fbd253cd9d7b95a502a7e6519adc776 WatchSource:0}: Error finding container eecb132fd5b19257fdd6a905b61e205f7fbd253cd9d7b95a502a7e6519adc776: Status 404 returned error can't find the container with id eecb132fd5b19257fdd6a905b61e205f7fbd253cd9d7b95a502a7e6519adc776 Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.508584 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" event={"ID":"68b08cc9-812d-4199-8654-9a5a3f2a855f","Type":"ContainerStarted","Data":"b295efb58c72c972ad095fdc12e749feb982a1d5233005959d82d1b4bdd97efa"} Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.510027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" event={"ID":"6e92b566-c5a6-40e8-be75-5de416385888","Type":"ContainerStarted","Data":"b42f5bdb7d9e77c3cb35774682ed2da11c44d82f0561549bd798b58ca6c59c4a"} Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.511492 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" event={"ID":"f58c3336-8153-4c54-95c6-2cf2f23cbe57","Type":"ContainerStarted","Data":"925f2a9b2e0ff0e67dfe50adb75f1e443db778b4ba646da870a28a120b53f422"} Feb 19 19:37:50 crc kubenswrapper[4787]: W0219 19:37:50.514707 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0944c0f9_ef54_46cc_be37_a59477312705.slice/crio-66a43d7a0f2ac9ddd8c0d272ee5a736c9ec4cda1d6e75692a0c4066f8846134a WatchSource:0}: Error finding container 66a43d7a0f2ac9ddd8c0d272ee5a736c9ec4cda1d6e75692a0c4066f8846134a: Status 404 returned error can't find the container with id 66a43d7a0f2ac9ddd8c0d272ee5a736c9ec4cda1d6e75692a0c4066f8846134a Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.516034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" event={"ID":"9cdc475f-0036-4e63-8fd4-c1e44537668d","Type":"ContainerStarted","Data":"b3340fdd15d8be9812d4f0d23c39d713396930862e5bb273f5fb3346a7dc50ea"} Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.517289 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zrmqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58b878c868-9zgl9_openstack-operators(90a8e9a7-b3db-4b64-bde8-569c3e8485d5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.519486 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" podUID="90a8e9a7-b3db-4b64-bde8-569c3e8485d5" Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.520507 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfzp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-q46b7_openstack-operators(0944c0f9-ef54-46cc-be37-a59477312705): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.520743 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" event={"ID":"00ef1a7b-bf28-4126-b60f-c79af3fde4da","Type":"ContainerStarted","Data":"f74a3ceb41d03670179790d8b050bc32fcec0eb6bdbe31ea7820fc315064429b"} Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.520930 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tg2mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-77xc6_openstack-operators(46df12dd-6fd4-4508-8141-ef1cc6551d79): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.522193 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podUID="46df12dd-6fd4-4508-8141-ef1cc6551d79" Feb 19 19:37:50 crc kubenswrapper[4787]: E0219 19:37:50.522198 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podUID="0944c0f9-ef54-46cc-be37-a59477312705" Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.527977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" event={"ID":"285a6f28-aeac-4b0d-816a-2eb05abe7ef3","Type":"ContainerStarted","Data":"7fe62902284fd3cfb647ac22377e610ea48e4512f76dd3643ae285c19f3a9c1a"} Feb 19 
19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.536451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" event={"ID":"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4","Type":"ContainerStarted","Data":"b82762330dc7aaac1d6ca12131e483fbb29a369ee138a25b5f1c606f1c741903"} Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.538873 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" event={"ID":"250d2041-efa0-41fb-8b0e-e99ba8c1c14c","Type":"ContainerStarted","Data":"0c08912c5aae7d98013645d5170a9c5f37f164b66d6c03360d5ba5a742d67999"} Feb 19 19:37:50 crc kubenswrapper[4787]: I0219 19:37:50.546551 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" event={"ID":"880dd943-ce91-4373-ab8a-fd5df0a44e2a","Type":"ContainerStarted","Data":"504ef90592070e1519a20b3591ea5dcb2609a28ba60cbaf8c7e6b3f46293d91d"} Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.565883 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" event={"ID":"26a6b075-ab07-4508-86f7-2af4934e078a","Type":"ContainerStarted","Data":"16507b7b19a5b4e9dd196d59d3caf7faa2fe22c7374ee057fa7d622cb2b60ab4"} Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.568768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" event={"ID":"46df12dd-6fd4-4508-8141-ef1cc6551d79","Type":"ContainerStarted","Data":"eecb132fd5b19257fdd6a905b61e205f7fbd253cd9d7b95a502a7e6519adc776"} Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.570834 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podUID="46df12dd-6fd4-4508-8141-ef1cc6551d79" Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.571161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" event={"ID":"0944c0f9-ef54-46cc-be37-a59477312705","Type":"ContainerStarted","Data":"66a43d7a0f2ac9ddd8c0d272ee5a736c9ec4cda1d6e75692a0c4066f8846134a"} Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.574212 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podUID="0944c0f9-ef54-46cc-be37-a59477312705" Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.591136 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" event={"ID":"90a8e9a7-b3db-4b64-bde8-569c3e8485d5","Type":"ContainerStarted","Data":"fe3df239841cd207fae7ab1cc124ca75956671a106c2ef738c338db9e27878f8"} Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.597809 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" podUID="90a8e9a7-b3db-4b64-bde8-569c3e8485d5" Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.598890 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" event={"ID":"bfb600b1-766e-4df0-9f20-a5b4ad0ed684","Type":"ContainerStarted","Data":"37a3d370b7c39e8bbdb5c6e4fce890b028e30edec8365a58f83b841224bbf9a4"} Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.710828 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.713716 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.713889 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert podName:06aa3b20-a2ee-4c2b-bda6-0e876910a26c nodeName:}" failed. No retries permitted until 2026-02-19 19:37:55.71376151 +0000 UTC m=+1143.504427452 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert") pod "infra-operator-controller-manager-79d975b745-j59h4" (UID: "06aa3b20-a2ee-4c2b-bda6-0e876910a26c") : secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:51 crc kubenswrapper[4787]: I0219 19:37:51.914672 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.916552 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:51 crc kubenswrapper[4787]: E0219 19:37:51.916661 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert podName:78a45da3-619d-4cc4-a819-6dad66a61737 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:55.916640621 +0000 UTC m=+1143.707306623 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" (UID: "78a45da3-619d-4cc4-a819-6dad66a61737") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:52 crc kubenswrapper[4787]: I0219 19:37:52.423327 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.423503 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 19:37:52 crc kubenswrapper[4787]: I0219 19:37:52.423807 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.423836 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:56.423815271 +0000 UTC m=+1144.214481223 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "webhook-server-cert" not found Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.423944 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.423981 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:37:56.423972726 +0000 UTC m=+1144.214638668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "metrics-server-cert" not found Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.621674 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podUID="46df12dd-6fd4-4508-8141-ef1cc6551d79" Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.625189 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podUID="0944c0f9-ef54-46cc-be37-a59477312705" Feb 19 19:37:52 crc kubenswrapper[4787]: E0219 19:37:52.629651 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.64:5001/openstack-k8s-operators/telemetry-operator:b83e41b73854b29b5d6860a1d9e9ac7611640781\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" podUID="90a8e9a7-b3db-4b64-bde8-569c3e8485d5" Feb 19 19:37:55 crc kubenswrapper[4787]: I0219 19:37:55.796302 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:37:55 crc kubenswrapper[4787]: E0219 19:37:55.796454 4787 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:55 crc kubenswrapper[4787]: E0219 19:37:55.796982 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert podName:06aa3b20-a2ee-4c2b-bda6-0e876910a26c nodeName:}" failed. No retries permitted until 2026-02-19 19:38:03.796961795 +0000 UTC m=+1151.587627737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert") pod "infra-operator-controller-manager-79d975b745-j59h4" (UID: "06aa3b20-a2ee-4c2b-bda6-0e876910a26c") : secret "infra-operator-webhook-server-cert" not found Feb 19 19:37:56 crc kubenswrapper[4787]: I0219 19:37:56.001668 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:37:56 crc kubenswrapper[4787]: E0219 19:37:56.001745 4787 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:56 crc kubenswrapper[4787]: E0219 19:37:56.001782 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert podName:78a45da3-619d-4cc4-a819-6dad66a61737 nodeName:}" failed. No retries permitted until 2026-02-19 19:38:04.001770121 +0000 UTC m=+1151.792436063 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" (UID: "78a45da3-619d-4cc4-a819-6dad66a61737") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 19:37:56 crc kubenswrapper[4787]: I0219 19:37:56.511027 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:56 crc kubenswrapper[4787]: I0219 19:37:56.511145 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:37:56 crc kubenswrapper[4787]: E0219 19:37:56.511300 4787 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 19:37:56 crc kubenswrapper[4787]: E0219 19:37:56.511355 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:38:04.51134168 +0000 UTC m=+1152.302007612 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "webhook-server-cert" not found Feb 19 19:37:56 crc kubenswrapper[4787]: E0219 19:37:56.512028 4787 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 19:37:56 crc kubenswrapper[4787]: E0219 19:37:56.512135 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs podName:36c1d908-879d-4d98-bd71-06b5c6e802e8 nodeName:}" failed. No retries permitted until 2026-02-19 19:38:04.512115703 +0000 UTC m=+1152.302781655 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs") pod "openstack-operator-controller-manager-b9547c8c7-s8zzh" (UID: "36c1d908-879d-4d98-bd71-06b5c6e802e8") : secret "metrics-server-cert" not found Feb 19 19:38:03 crc kubenswrapper[4787]: I0219 19:38:03.846403 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:38:03 crc kubenswrapper[4787]: I0219 19:38:03.856506 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06aa3b20-a2ee-4c2b-bda6-0e876910a26c-cert\") pod \"infra-operator-controller-manager-79d975b745-j59h4\" (UID: \"06aa3b20-a2ee-4c2b-bda6-0e876910a26c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:38:03 crc 
kubenswrapper[4787]: I0219 19:38:03.904039 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-26p45" Feb 19 19:38:03 crc kubenswrapper[4787]: I0219 19:38:03.909385 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.050049 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.053763 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78a45da3-619d-4cc4-a819-6dad66a61737-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc\" (UID: \"78a45da3-619d-4cc4-a819-6dad66a61737\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.260231 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-89gf7" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.269085 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.560400 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.560501 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.564647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-webhook-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.566041 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/36c1d908-879d-4d98-bd71-06b5c6e802e8-metrics-certs\") pod \"openstack-operator-controller-manager-b9547c8c7-s8zzh\" (UID: \"36c1d908-879d-4d98-bd71-06b5c6e802e8\") " pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.617137 4787 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-xrfzn" Feb 19 19:38:04 crc kubenswrapper[4787]: I0219 19:38:04.624745 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:07 crc kubenswrapper[4787]: E0219 19:38:07.716250 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 19:38:07 crc kubenswrapper[4787]: E0219 19:38:07.717099 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twqqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-whv4k_openstack-operators(285a6f28-aeac-4b0d-816a-2eb05abe7ef3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:07 crc kubenswrapper[4787]: E0219 19:38:07.719034 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" podUID="285a6f28-aeac-4b0d-816a-2eb05abe7ef3" Feb 19 19:38:07 crc kubenswrapper[4787]: E0219 19:38:07.761163 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" podUID="285a6f28-aeac-4b0d-816a-2eb05abe7ef3" Feb 19 19:38:08 crc kubenswrapper[4787]: E0219 19:38:08.241135 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 19 19:38:08 crc kubenswrapper[4787]: E0219 19:38:08.241318 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9zpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-w557k_openstack-operators(9cdc475f-0036-4e63-8fd4-c1e44537668d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:08 crc kubenswrapper[4787]: E0219 19:38:08.242536 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" podUID="9cdc475f-0036-4e63-8fd4-c1e44537668d" Feb 19 19:38:08 crc kubenswrapper[4787]: E0219 19:38:08.781462 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" podUID="9cdc475f-0036-4e63-8fd4-c1e44537668d" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.123676 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.124002 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dglv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-5k7fl_openstack-operators(68b08cc9-812d-4199-8654-9a5a3f2a855f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.125310 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" podUID="68b08cc9-812d-4199-8654-9a5a3f2a855f" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.764499 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.764934 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbgks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-bbnhv_openstack-operators(a752c75e-1e1e-4d78-b82a-95f8df84523f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.766033 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" podUID="a752c75e-1e1e-4d78-b82a-95f8df84523f" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.791883 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" podUID="68b08cc9-812d-4199-8654-9a5a3f2a855f" Feb 19 19:38:10 crc kubenswrapper[4787]: E0219 19:38:10.792150 4787 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" podUID="a752c75e-1e1e-4d78-b82a-95f8df84523f" Feb 19 19:38:12 crc kubenswrapper[4787]: E0219 19:38:12.436248 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 19 19:38:12 crc kubenswrapper[4787]: E0219 19:38:12.436474 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2s4tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-j2ktf_openstack-operators(880dd943-ce91-4373-ab8a-fd5df0a44e2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:12 crc kubenswrapper[4787]: E0219 19:38:12.437669 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" podUID="880dd943-ce91-4373-ab8a-fd5df0a44e2a" Feb 19 19:38:12 crc kubenswrapper[4787]: E0219 19:38:12.806648 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" podUID="880dd943-ce91-4373-ab8a-fd5df0a44e2a" Feb 19 19:38:14 crc kubenswrapper[4787]: E0219 19:38:14.949818 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 19 19:38:14 crc kubenswrapper[4787]: E0219 19:38:14.950756 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhlvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-tlx7r_openstack-operators(f58c3336-8153-4c54-95c6-2cf2f23cbe57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:14 crc kubenswrapper[4787]: E0219 19:38:14.952925 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" podUID="f58c3336-8153-4c54-95c6-2cf2f23cbe57" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.304042 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.304197 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d9tqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-9p6x4_openstack-operators(00ef1a7b-bf28-4126-b60f-c79af3fde4da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.305370 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" podUID="00ef1a7b-bf28-4126-b60f-c79af3fde4da" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.533255 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.533456 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jv87f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-rv4sk_openstack-operators(c0ee76ae-6d9e-4470-8f77-27d7d231bb7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.534808 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" podUID="c0ee76ae-6d9e-4470-8f77-27d7d231bb7d" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.829740 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" podUID="00ef1a7b-bf28-4126-b60f-c79af3fde4da" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.829743 4787 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" podUID="c0ee76ae-6d9e-4470-8f77-27d7d231bb7d" Feb 19 19:38:15 crc kubenswrapper[4787]: E0219 19:38:15.829757 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" podUID="f58c3336-8153-4c54-95c6-2cf2f23cbe57" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.261216 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.261384 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9vkc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-88xzz_openstack-operators(26a6b075-ab07-4508-86f7-2af4934e078a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.262645 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" podUID="26a6b075-ab07-4508-86f7-2af4934e078a" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.742988 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.743159 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5wn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-phlkl_openstack-operators(250d2041-efa0-41fb-8b0e-e99ba8c1c14c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.744587 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" podUID="250d2041-efa0-41fb-8b0e-e99ba8c1c14c" Feb 19 19:38:16 crc kubenswrapper[4787]: E0219 19:38:16.838002 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" podUID="26a6b075-ab07-4508-86f7-2af4934e078a" Feb 19 19:38:16 crc 
kubenswrapper[4787]: E0219 19:38:16.838048 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" podUID="250d2041-efa0-41fb-8b0e-e99ba8c1c14c" Feb 19 19:38:18 crc kubenswrapper[4787]: E0219 19:38:18.880269 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 19:38:18 crc kubenswrapper[4787]: E0219 19:38:18.880697 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hhgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-66hj2_openstack-operators(4a47b4c3-d7f4-4194-bd9c-fdef06d3450d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:38:18 crc kubenswrapper[4787]: E0219 19:38:18.882068 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" podUID="4a47b4c3-d7f4-4194-bd9c-fdef06d3450d" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.428415 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-j59h4"] Feb 19 19:38:19 crc kubenswrapper[4787]: W0219 19:38:19.440838 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06aa3b20_a2ee_4c2b_bda6_0e876910a26c.slice/crio-c0decf74ac65e6c98dd272e2b285f768b476eb64ed59af10474a47fcc04cd126 WatchSource:0}: Error finding container c0decf74ac65e6c98dd272e2b285f768b476eb64ed59af10474a47fcc04cd126: Status 404 returned error can't find the container with id c0decf74ac65e6c98dd272e2b285f768b476eb64ed59af10474a47fcc04cd126 Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.543619 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh"] Feb 19 19:38:19 crc kubenswrapper[4787]: W0219 19:38:19.561649 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36c1d908_879d_4d98_bd71_06b5c6e802e8.slice/crio-6ab1c408e18a27d87fbff04d475a643f2edc472ca9463752ac82c87934af80d4 WatchSource:0}: Error finding container 6ab1c408e18a27d87fbff04d475a643f2edc472ca9463752ac82c87934af80d4: Status 404 returned error can't find the container with id 6ab1c408e18a27d87fbff04d475a643f2edc472ca9463752ac82c87934af80d4 Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.561830 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc"] Feb 19 19:38:19 crc kubenswrapper[4787]: W0219 19:38:19.573006 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78a45da3_619d_4cc4_a819_6dad66a61737.slice/crio-9cbe8155aec927383505f31fb3239a9cc7f22a922c7661d1b8c36f1f3dcd1eaa WatchSource:0}: Error finding container 9cbe8155aec927383505f31fb3239a9cc7f22a922c7661d1b8c36f1f3dcd1eaa: Status 404 returned error can't find the container with id 9cbe8155aec927383505f31fb3239a9cc7f22a922c7661d1b8c36f1f3dcd1eaa Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.857182 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" event={"ID":"b7deddaa-9e2a-4e95-8dce-fb6b70a0523e","Type":"ContainerStarted","Data":"5aba76b675f81f16037a7344f3dfdfac50f4c4d75b65c66d07ac59ce532cffb7"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.857475 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.858904 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" event={"ID":"36c1d908-879d-4d98-bd71-06b5c6e802e8","Type":"ContainerStarted","Data":"8cd9ecbec9b4936919cbb85801fbfafbff9e6147f9d2f59713cdf267a7f1ba58"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.858938 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" event={"ID":"36c1d908-879d-4d98-bd71-06b5c6e802e8","Type":"ContainerStarted","Data":"6ab1c408e18a27d87fbff04d475a643f2edc472ca9463752ac82c87934af80d4"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.859060 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.860114 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" event={"ID":"06aa3b20-a2ee-4c2b-bda6-0e876910a26c","Type":"ContainerStarted","Data":"c0decf74ac65e6c98dd272e2b285f768b476eb64ed59af10474a47fcc04cd126"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.862212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" event={"ID":"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4","Type":"ContainerStarted","Data":"fd01798008a5a1b57b4d4e9e87558807cb3fb96e27b4e1013ea47642c1a67bbc"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.862266 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.886977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" event={"ID":"78a45da3-619d-4cc4-a819-6dad66a61737","Type":"ContainerStarted","Data":"9cbe8155aec927383505f31fb3239a9cc7f22a922c7661d1b8c36f1f3dcd1eaa"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.892929 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" event={"ID":"46df12dd-6fd4-4508-8141-ef1cc6551d79","Type":"ContainerStarted","Data":"dd9dce45700eefba0df994264075c6a68990ccd0daf840432e73b8ce497777bf"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.893170 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.898316 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" 
event={"ID":"0fdbbc7b-81f4-401b-8df0-59417ab3ec18","Type":"ContainerStarted","Data":"19b1e79e330dcb597722585a9292953b168b0bed612199382bf4292fda97b8ec"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.899093 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.904038 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" event={"ID":"39e4daf9-e2ed-4325-9f5f-27b2b5662945","Type":"ContainerStarted","Data":"74cc7329011f6397f012453e307f2833d0baece700f627e9a2fe7b2b9e52cbbd"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.904391 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.909097 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" event={"ID":"0944c0f9-ef54-46cc-be37-a59477312705","Type":"ContainerStarted","Data":"28ae7749042adad40aadabb58f63c7615de1d59d26aad161c8b727d0cb1e6696"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.909315 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.911369 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" event={"ID":"90a8e9a7-b3db-4b64-bde8-569c3e8485d5","Type":"ContainerStarted","Data":"01f6d7e23e44a8e76978a411be53e5db983f7b8e083715d5638b4a31b894a4b1"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.911696 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.916086 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" event={"ID":"6e92b566-c5a6-40e8-be75-5de416385888","Type":"ContainerStarted","Data":"ca757893b96d7c2185db409334a734d6d78ff85fd53d470a37a482a076bda373"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.916473 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.934228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" event={"ID":"bfb600b1-766e-4df0-9f20-a5b4ad0ed684","Type":"ContainerStarted","Data":"f00e1d986ae6e2262e8718fc113ac6beca373bf3fae982003f512ec94258bfd2"} Feb 19 19:38:19 crc kubenswrapper[4787]: I0219 19:38:19.934346 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:38:19 crc kubenswrapper[4787]: E0219 19:38:19.936715 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" podUID="4a47b4c3-d7f4-4194-bd9c-fdef06d3450d" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.115631 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" podStartSLOduration=4.81637521 podStartE2EDuration="33.115598708s" podCreationTimestamp="2026-02-19 19:37:47 
+0000 UTC" firstStartedPulling="2026-02-19 19:37:48.962973583 +0000 UTC m=+1136.753639525" lastFinishedPulling="2026-02-19 19:38:17.262197081 +0000 UTC m=+1165.052863023" observedRunningTime="2026-02-19 19:38:20.098794154 +0000 UTC m=+1167.889460096" watchObservedRunningTime="2026-02-19 19:38:20.115598708 +0000 UTC m=+1167.906264650" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.201386 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" podStartSLOduration=5.440087413 podStartE2EDuration="33.201373195s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:48.964678112 +0000 UTC m=+1136.755344044" lastFinishedPulling="2026-02-19 19:38:16.725963884 +0000 UTC m=+1164.516629826" observedRunningTime="2026-02-19 19:38:20.199283255 +0000 UTC m=+1167.989949207" watchObservedRunningTime="2026-02-19 19:38:20.201373195 +0000 UTC m=+1167.992039137" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.362519 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podStartSLOduration=4.903896733 podStartE2EDuration="33.3624991s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.520386616 +0000 UTC m=+1138.311052558" lastFinishedPulling="2026-02-19 19:38:18.978988983 +0000 UTC m=+1166.769654925" observedRunningTime="2026-02-19 19:38:20.28427386 +0000 UTC m=+1168.074939812" watchObservedRunningTime="2026-02-19 19:38:20.3624991 +0000 UTC m=+1168.153165042" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.364417 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" podStartSLOduration=5.618425523 podStartE2EDuration="33.364407775s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" 
firstStartedPulling="2026-02-19 19:37:48.978931092 +0000 UTC m=+1136.769597034" lastFinishedPulling="2026-02-19 19:38:16.724913344 +0000 UTC m=+1164.515579286" observedRunningTime="2026-02-19 19:38:20.349111015 +0000 UTC m=+1168.139776957" watchObservedRunningTime="2026-02-19 19:38:20.364407775 +0000 UTC m=+1168.155073717" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.407655 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" podStartSLOduration=5.552980866 podStartE2EDuration="32.407640369s" podCreationTimestamp="2026-02-19 19:37:48 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.407543458 +0000 UTC m=+1138.198209400" lastFinishedPulling="2026-02-19 19:38:17.262202961 +0000 UTC m=+1165.052868903" observedRunningTime="2026-02-19 19:38:20.401505502 +0000 UTC m=+1168.192171444" watchObservedRunningTime="2026-02-19 19:38:20.407640369 +0000 UTC m=+1168.198306311" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.520416 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podStartSLOduration=5.065878944 podStartE2EDuration="33.520396233s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.52085739 +0000 UTC m=+1138.311523332" lastFinishedPulling="2026-02-19 19:38:18.975374679 +0000 UTC m=+1166.766040621" observedRunningTime="2026-02-19 19:38:20.48797079 +0000 UTC m=+1168.278636732" watchObservedRunningTime="2026-02-19 19:38:20.520396233 +0000 UTC m=+1168.311062185" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.563765 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" podStartSLOduration=4.721283431 podStartE2EDuration="32.56374314s" podCreationTimestamp="2026-02-19 19:37:48 +0000 UTC" 
firstStartedPulling="2026-02-19 19:37:50.49134226 +0000 UTC m=+1138.282008202" lastFinishedPulling="2026-02-19 19:38:18.333801969 +0000 UTC m=+1166.124467911" observedRunningTime="2026-02-19 19:38:20.534095287 +0000 UTC m=+1168.324761229" watchObservedRunningTime="2026-02-19 19:38:20.56374314 +0000 UTC m=+1168.354409082" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.597737 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" podStartSLOduration=5.083780198 podStartE2EDuration="33.597719087s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.517173194 +0000 UTC m=+1138.307839136" lastFinishedPulling="2026-02-19 19:38:19.031112083 +0000 UTC m=+1166.821778025" observedRunningTime="2026-02-19 19:38:20.566402396 +0000 UTC m=+1168.357068338" watchObservedRunningTime="2026-02-19 19:38:20.597719087 +0000 UTC m=+1168.388385029" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.621141 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" podStartSLOduration=32.62112361 podStartE2EDuration="32.62112361s" podCreationTimestamp="2026-02-19 19:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:38:20.614416047 +0000 UTC m=+1168.405081989" watchObservedRunningTime="2026-02-19 19:38:20.62112361 +0000 UTC m=+1168.411789552" Feb 19 19:38:20 crc kubenswrapper[4787]: I0219 19:38:20.649075 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" podStartSLOduration=5.056573803 podStartE2EDuration="33.649056304s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.741299388 +0000 UTC 
m=+1137.531965330" lastFinishedPulling="2026-02-19 19:38:18.333781889 +0000 UTC m=+1166.124447831" observedRunningTime="2026-02-19 19:38:20.633913518 +0000 UTC m=+1168.424579480" watchObservedRunningTime="2026-02-19 19:38:20.649056304 +0000 UTC m=+1168.439722246" Feb 19 19:38:24 crc kubenswrapper[4787]: I0219 19:38:24.631926 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.025763 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" event={"ID":"9cdc475f-0036-4e63-8fd4-c1e44537668d","Type":"ContainerStarted","Data":"7aac84434deafa4cf9cdd72fad88200548166374bfc0361e5734046f8762dfca"} Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.027072 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.034254 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" event={"ID":"285a6f28-aeac-4b0d-816a-2eb05abe7ef3","Type":"ContainerStarted","Data":"611160f07270b4a54b8c6dc2722cb26f80eb319e40d158c6239dd87dd1411c8a"} Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.034743 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.046058 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" event={"ID":"a752c75e-1e1e-4d78-b82a-95f8df84523f","Type":"ContainerStarted","Data":"e3a513231a8635ef72ad38c296d16c6f1e4ba5f8506868af5bc2edfb626488cd"} Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 
19:38:27.046840 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.056007 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" podStartSLOduration=3.635424242 podStartE2EDuration="40.055992147s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.761352795 +0000 UTC m=+1137.552018737" lastFinishedPulling="2026-02-19 19:38:26.1819207 +0000 UTC m=+1173.972586642" observedRunningTime="2026-02-19 19:38:27.055844483 +0000 UTC m=+1174.846510425" watchObservedRunningTime="2026-02-19 19:38:27.055992147 +0000 UTC m=+1174.846658079" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.060859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" event={"ID":"68b08cc9-812d-4199-8654-9a5a3f2a855f","Type":"ContainerStarted","Data":"a622eeff600084205f28783a43c64b9aac1e1900d829616616aa485adae80280"} Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.061570 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.080112 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" event={"ID":"06aa3b20-a2ee-4c2b-bda6-0e876910a26c","Type":"ContainerStarted","Data":"de0b822981c16006c7e4059af2488b114ff7097ae0d90c9b42efcd9f9ec17a12"} Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.080916 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.111528 
4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" podStartSLOduration=3.678767469 podStartE2EDuration="40.111513757s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.742555374 +0000 UTC m=+1137.533221316" lastFinishedPulling="2026-02-19 19:38:26.175301662 +0000 UTC m=+1173.965967604" observedRunningTime="2026-02-19 19:38:27.087194754 +0000 UTC m=+1174.877860696" watchObservedRunningTime="2026-02-19 19:38:27.111513757 +0000 UTC m=+1174.902179699" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.141770 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" podStartSLOduration=3.704461064 podStartE2EDuration="40.141751298s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.745738626 +0000 UTC m=+1137.536404568" lastFinishedPulling="2026-02-19 19:38:26.18302886 +0000 UTC m=+1173.973694802" observedRunningTime="2026-02-19 19:38:27.113069089 +0000 UTC m=+1174.903735031" watchObservedRunningTime="2026-02-19 19:38:27.141751298 +0000 UTC m=+1174.932417240" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.146308 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" podStartSLOduration=33.373172759 podStartE2EDuration="40.14628812s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:38:19.446682416 +0000 UTC m=+1167.237348358" lastFinishedPulling="2026-02-19 19:38:26.219797777 +0000 UTC m=+1174.010463719" observedRunningTime="2026-02-19 19:38:27.140730461 +0000 UTC m=+1174.931396403" watchObservedRunningTime="2026-02-19 19:38:27.14628812 +0000 UTC m=+1174.936954062" Feb 19 19:38:27 crc kubenswrapper[4787]: I0219 19:38:27.184363 4787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" podStartSLOduration=3.452852627 podStartE2EDuration="40.184346211s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.507376394 +0000 UTC m=+1137.298042336" lastFinishedPulling="2026-02-19 19:38:26.238869978 +0000 UTC m=+1174.029535920" observedRunningTime="2026-02-19 19:38:27.183392406 +0000 UTC m=+1174.974058348" watchObservedRunningTime="2026-02-19 19:38:27.184346211 +0000 UTC m=+1174.975012153" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.005155 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.013498 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.059186 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.501350 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.526221 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.727501 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.750618 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58b878c868-9zgl9" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.807511 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 19:38:28 crc kubenswrapper[4787]: I0219 19:38:28.864170 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.095651 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" event={"ID":"78a45da3-619d-4cc4-a819-6dad66a61737","Type":"ContainerStarted","Data":"d3332aeb4216ec66dca10c3867a0caf465a6bd6db1b2e60f855a48fa407eefa6"} Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.095807 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.096753 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" event={"ID":"00ef1a7b-bf28-4126-b60f-c79af3fde4da","Type":"ContainerStarted","Data":"a1c1a285f7eef5ea944ba4cdf8e537e91c8a81ef864bf120e03e2e0788f056fc"} Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.096906 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.098323 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" 
event={"ID":"880dd943-ce91-4373-ab8a-fd5df0a44e2a","Type":"ContainerStarted","Data":"075f330daa49d44db4f7bafac1ae8166acd35567e1a2ca9ad4135105f5c8dcd7"} Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.098527 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.124920 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" podStartSLOduration=33.173922399 podStartE2EDuration="42.124903868s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:38:19.583599638 +0000 UTC m=+1167.374265570" lastFinishedPulling="2026-02-19 19:38:28.534581097 +0000 UTC m=+1176.325247039" observedRunningTime="2026-02-19 19:38:29.122444972 +0000 UTC m=+1176.913110914" watchObservedRunningTime="2026-02-19 19:38:29.124903868 +0000 UTC m=+1176.915569810" Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.163359 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" podStartSLOduration=4.011870139 podStartE2EDuration="42.163142514s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.384107483 +0000 UTC m=+1138.174773425" lastFinishedPulling="2026-02-19 19:38:28.535379858 +0000 UTC m=+1176.326045800" observedRunningTime="2026-02-19 19:38:29.157497172 +0000 UTC m=+1176.948163114" watchObservedRunningTime="2026-02-19 19:38:29.163142514 +0000 UTC m=+1176.953808446" Feb 19 19:38:29 crc kubenswrapper[4787]: I0219 19:38:29.180882 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" podStartSLOduration=3.394984638 podStartE2EDuration="42.18086467s" 
podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.750214365 +0000 UTC m=+1137.540880307" lastFinishedPulling="2026-02-19 19:38:28.536094397 +0000 UTC m=+1176.326760339" observedRunningTime="2026-02-19 19:38:29.177378356 +0000 UTC m=+1176.968044298" watchObservedRunningTime="2026-02-19 19:38:29.18086467 +0000 UTC m=+1176.971530612" Feb 19 19:38:30 crc kubenswrapper[4787]: I0219 19:38:30.107749 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" event={"ID":"26a6b075-ab07-4508-86f7-2af4934e078a","Type":"ContainerStarted","Data":"cd5d6deadf1820db4ff17db918a4b10acb56ebf37cf463634d6ae5c6b85582e6"} Feb 19 19:38:30 crc kubenswrapper[4787]: I0219 19:38:30.108220 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:38:30 crc kubenswrapper[4787]: I0219 19:38:30.127556 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" podStartSLOduration=4.212613327 podStartE2EDuration="43.127540945s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.502197942 +0000 UTC m=+1138.292863884" lastFinishedPulling="2026-02-19 19:38:29.41712555 +0000 UTC m=+1177.207791502" observedRunningTime="2026-02-19 19:38:30.120703591 +0000 UTC m=+1177.911369533" watchObservedRunningTime="2026-02-19 19:38:30.127540945 +0000 UTC m=+1177.918206887" Feb 19 19:38:31 crc kubenswrapper[4787]: I0219 19:38:31.116697 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" event={"ID":"f58c3336-8153-4c54-95c6-2cf2f23cbe57","Type":"ContainerStarted","Data":"233c1bda6a74e6cbb68bea6a7b322634f84d13f725ba52212903ab35c036d215"} Feb 19 19:38:31 crc kubenswrapper[4787]: I0219 
19:38:31.117200 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" Feb 19 19:38:31 crc kubenswrapper[4787]: I0219 19:38:31.132498 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" podStartSLOduration=3.191879805 podStartE2EDuration="44.132471133s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.497994324 +0000 UTC m=+1137.288660266" lastFinishedPulling="2026-02-19 19:38:30.438585652 +0000 UTC m=+1178.229251594" observedRunningTime="2026-02-19 19:38:31.131917008 +0000 UTC m=+1178.922582950" watchObservedRunningTime="2026-02-19 19:38:31.132471133 +0000 UTC m=+1178.923137075" Feb 19 19:38:32 crc kubenswrapper[4787]: I0219 19:38:32.125445 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" event={"ID":"250d2041-efa0-41fb-8b0e-e99ba8c1c14c","Type":"ContainerStarted","Data":"0d4fb6fc527cd282955874d4d318336af5fb2885268d8299ccb8fcb5cfc5be9a"} Feb 19 19:38:32 crc kubenswrapper[4787]: I0219 19:38:32.126809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" event={"ID":"c0ee76ae-6d9e-4470-8f77-27d7d231bb7d","Type":"ContainerStarted","Data":"cfda97eb44588fcd6d0a0f234a12267516d399cc08cd9c8c4ba5c8cb5fc874ff"} Feb 19 19:38:32 crc kubenswrapper[4787]: I0219 19:38:32.127041 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:38:32 crc kubenswrapper[4787]: I0219 19:38:32.144087 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-phlkl" podStartSLOduration=3.227116573 
podStartE2EDuration="44.144068271s" podCreationTimestamp="2026-02-19 19:37:48 +0000 UTC" firstStartedPulling="2026-02-19 19:37:50.387364807 +0000 UTC m=+1138.178030749" lastFinishedPulling="2026-02-19 19:38:31.304316505 +0000 UTC m=+1179.094982447" observedRunningTime="2026-02-19 19:38:32.137195136 +0000 UTC m=+1179.927861088" watchObservedRunningTime="2026-02-19 19:38:32.144068271 +0000 UTC m=+1179.934734213" Feb 19 19:38:32 crc kubenswrapper[4787]: I0219 19:38:32.161120 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" podStartSLOduration=3.095002548 podStartE2EDuration="45.161099428s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.293774246 +0000 UTC m=+1137.084440188" lastFinishedPulling="2026-02-19 19:38:31.359871126 +0000 UTC m=+1179.150537068" observedRunningTime="2026-02-19 19:38:32.157715357 +0000 UTC m=+1179.948381299" watchObservedRunningTime="2026-02-19 19:38:32.161099428 +0000 UTC m=+1179.951765370" Feb 19 19:38:33 crc kubenswrapper[4787]: I0219 19:38:33.915271 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" Feb 19 19:38:34 crc kubenswrapper[4787]: I0219 19:38:34.274784 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 19:38:35 crc kubenswrapper[4787]: I0219 19:38:35.150787 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" event={"ID":"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d","Type":"ContainerStarted","Data":"b0c6b111d4ef249656af06dfe08162919cd4616bf432aee952fa282a8d5564cd"} Feb 19 19:38:35 crc kubenswrapper[4787]: I0219 19:38:35.151544 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:38:35 crc kubenswrapper[4787]: I0219 19:38:35.173392 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" podStartSLOduration=3.047031404 podStartE2EDuration="48.173373605s" podCreationTimestamp="2026-02-19 19:37:47 +0000 UTC" firstStartedPulling="2026-02-19 19:37:49.310723104 +0000 UTC m=+1137.101389046" lastFinishedPulling="2026-02-19 19:38:34.437065295 +0000 UTC m=+1182.227731247" observedRunningTime="2026-02-19 19:38:35.170184639 +0000 UTC m=+1182.960850591" watchObservedRunningTime="2026-02-19 19:38:35.173373605 +0000 UTC m=+1182.964039547" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.089977 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.212131 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.271969 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.288841 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.405662 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.491570 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.545422 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.599925 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 19:38:38 crc kubenswrapper[4787]: I0219 19:38:38.621065 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" Feb 19 19:38:48 crc kubenswrapper[4787]: I0219 19:38:48.330461 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.545159 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rmj5w"] Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.547301 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:04 crc kubenswrapper[4787]: W0219 19:39:04.549709 4787 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 19:39:04 crc kubenswrapper[4787]: E0219 19:39:04.549753 4787 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:39:04 crc kubenswrapper[4787]: W0219 19:39:04.549805 4787 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 19:39:04 crc kubenswrapper[4787]: E0219 19:39:04.549820 4787 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:39:04 crc kubenswrapper[4787]: W0219 19:39:04.550250 4787 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: configmaps "dns" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": 
no relationship found between node 'crc' and this object Feb 19 19:39:04 crc kubenswrapper[4787]: E0219 19:39:04.550275 4787 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"dns\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:39:04 crc kubenswrapper[4787]: W0219 19:39:04.550336 4787 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-z8l6f": failed to list *v1.Secret: secrets "dnsmasq-dns-dockercfg-z8l6f" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 19 19:39:04 crc kubenswrapper[4787]: E0219 19:39:04.550352 4787 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-z8l6f\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"dnsmasq-dns-dockercfg-z8l6f\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.567645 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rmj5w"] Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.633710 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vlx8z"] Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.635841 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.645834 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.669884 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vlx8z"] Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.717590 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-config\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.717729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xctbj\" (UniqueName: \"kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.819827 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-config\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.819944 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xctbj\" (UniqueName: \"kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 
19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.819973 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrn6\" (UniqueName: \"kubernetes.io/projected/120d3242-bcef-48c2-be1c-a82698775f07-kube-api-access-fxrn6\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.819994 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.820046 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-config\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.921820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrn6\" (UniqueName: \"kubernetes.io/projected/120d3242-bcef-48c2-be1c-a82698775f07-kube-api-access-fxrn6\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.921869 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: 
I0219 19:39:04.921958 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-config\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:04 crc kubenswrapper[4787]: I0219 19:39:04.922981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:05 crc kubenswrapper[4787]: I0219 19:39:05.400193 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 19:39:05 crc kubenswrapper[4787]: I0219 19:39:05.722275 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 19:39:05 crc kubenswrapper[4787]: I0219 19:39:05.731604 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-config\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:05 crc kubenswrapper[4787]: I0219 19:39:05.733909 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-config\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:05 crc kubenswrapper[4787]: E0219 19:39:05.836766 4787 projected.go:288] Couldn't get configMap openstack/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 19 19:39:05 crc kubenswrapper[4787]: E0219 
19:39:05.836827 4787 projected.go:194] Error preparing data for projected volume kube-api-access-xctbj for pod openstack/dnsmasq-dns-675f4bcbfc-rmj5w: failed to sync configmap cache: timed out waiting for the condition Feb 19 19:39:05 crc kubenswrapper[4787]: E0219 19:39:05.836923 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj podName:978a88dc-a643-4bc9-b46e-8c8e78f7b4cc nodeName:}" failed. No retries permitted until 2026-02-19 19:39:06.336880859 +0000 UTC m=+1214.127546811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xctbj" (UniqueName: "kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj") pod "dnsmasq-dns-675f4bcbfc-rmj5w" (UID: "978a88dc-a643-4bc9-b46e-8c8e78f7b4cc") : failed to sync configmap cache: timed out waiting for the condition Feb 19 19:39:05 crc kubenswrapper[4787]: I0219 19:39:05.911691 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 19:39:05 crc kubenswrapper[4787]: I0219 19:39:05.920429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrn6\" (UniqueName: \"kubernetes.io/projected/120d3242-bcef-48c2-be1c-a82698775f07-kube-api-access-fxrn6\") pod \"dnsmasq-dns-78dd6ddcc-vlx8z\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:06 crc kubenswrapper[4787]: I0219 19:39:06.044469 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-z8l6f" Feb 19 19:39:06 crc kubenswrapper[4787]: I0219 19:39:06.155076 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:06 crc kubenswrapper[4787]: I0219 19:39:06.358926 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xctbj\" (UniqueName: \"kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:06 crc kubenswrapper[4787]: I0219 19:39:06.385360 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xctbj\" (UniqueName: \"kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj\") pod \"dnsmasq-dns-675f4bcbfc-rmj5w\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:06 crc kubenswrapper[4787]: I0219 19:39:06.615640 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vlx8z"] Feb 19 19:39:06 crc kubenswrapper[4787]: I0219 19:39:06.668381 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.120870 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rmj5w"] Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.436385 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rmj5w"] Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.444998 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" event={"ID":"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc","Type":"ContainerStarted","Data":"201a51f4c0fd91d7e036740b7b3dbbb99c347a8deaa607c577710722df295af5"} Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.450139 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" event={"ID":"120d3242-bcef-48c2-be1c-a82698775f07","Type":"ContainerStarted","Data":"2b2d05425c3f328517eb714c43348bcf6c50fd2b0af85114b6889062de0fbf3a"} Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.462664 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-x2n4w"] Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.464859 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.471163 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-x2n4w"] Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.580724 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-dns-svc\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.580814 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-config\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.580847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5sj\" (UniqueName: \"kubernetes.io/projected/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-kube-api-access-bt5sj\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.683072 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-dns-svc\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.683221 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-config\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.683247 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5sj\" (UniqueName: \"kubernetes.io/projected/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-kube-api-access-bt5sj\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.684530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-dns-svc\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.685376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-config\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.775221 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5sj\" (UniqueName: \"kubernetes.io/projected/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-kube-api-access-bt5sj\") pod \"dnsmasq-dns-666b6646f7-x2n4w\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.809139 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:07 crc kubenswrapper[4787]: I0219 19:39:07.950361 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vlx8z"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.012798 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s7pld"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.014247 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.064190 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s7pld"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.094806 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9ks\" (UniqueName: \"kubernetes.io/projected/f7fd439a-6768-4d1c-8317-4d8cd31301bb-kube-api-access-zc9ks\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.095152 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-config\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.095178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 
19:39:08.196532 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9ks\" (UniqueName: \"kubernetes.io/projected/f7fd439a-6768-4d1c-8317-4d8cd31301bb-kube-api-access-zc9ks\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.196643 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-config\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.196668 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.197488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.197922 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-config\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.217544 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9ks\" 
(UniqueName: \"kubernetes.io/projected/f7fd439a-6768-4d1c-8317-4d8cd31301bb-kube-api-access-zc9ks\") pod \"dnsmasq-dns-57d769cc4f-s7pld\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.342411 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.534429 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-x2n4w"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.584737 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.586331 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.603947 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.604065 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hdjcc" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.604183 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.604381 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.604546 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.604587 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.604790 4787 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.629839 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.659668 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.661284 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.678412 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.679959 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713482 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96zn\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-kube-api-access-h96zn\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713583 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713640 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713671 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80458aec-a844-4f4d-b618-56bdc811cd43-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713729 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713787 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713818 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80458aec-a844-4f4d-b618-56bdc811cd43-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713886 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.713950 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.714023 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.725353 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.761831 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815515 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815584 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-server-conf\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815645 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815688 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-server-conf\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815733 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815755 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815773 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815938 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-config-data\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.815963 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/278d26c1-8a7c-4278-b84c-0c0c24d81f52-pod-info\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816055 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h96zn\" (UniqueName: 
\"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-kube-api-access-h96zn\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816086 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816146 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80458aec-a844-4f4d-b618-56bdc811cd43-erlang-cookie-secret\") 
pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816196 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816221 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816234 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d554l\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-kube-api-access-d554l\") pod 
\"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816281 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14da78cc-cd10-440d-9983-6e80d45f3e31-pod-info\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816301 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816342 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gbz\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-kube-api-access-v6gbz\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816365 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80458aec-a844-4f4d-b618-56bdc811cd43-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 
19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-config-data\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816418 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816446 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/278d26c1-8a7c-4278-b84c-0c0c24d81f52-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816464 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-config-data\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.816492 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14da78cc-cd10-440d-9983-6e80d45f3e31-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.817006 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.818824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-config-data\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.819239 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.819427 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.821750 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.822441 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.822567 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8d6ea76abc031ad8be5f46b0b6e594bd6a7a032377b43b3a75a511904a039abd/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.823102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.823185 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.823645 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80458aec-a844-4f4d-b618-56bdc811cd43-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc 
kubenswrapper[4787]: I0219 19:39:08.823666 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80458aec-a844-4f4d-b618-56bdc811cd43-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.840155 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h96zn\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-kube-api-access-h96zn\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.893665 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.917946 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14da78cc-cd10-440d-9983-6e80d45f3e31-pod-info\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.918048 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gbz\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-kube-api-access-v6gbz\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.918118 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.918138 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/278d26c1-8a7c-4278-b84c-0c0c24d81f52-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-config-data\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921522 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14da78cc-cd10-440d-9983-6e80d45f3e31-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921558 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-server-conf\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: 
\"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921666 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-server-conf\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921700 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921722 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921802 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-config-data\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: 
I0219 19:39:08.921823 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/278d26c1-8a7c-4278-b84c-0c0c24d81f52-pod-info\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921875 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921921 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921948 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.921993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.922023 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.922056 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.922069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.922093 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d554l\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-kube-api-access-d554l\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.922178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.922714 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-config-data\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " 
pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.923394 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.923830 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.924284 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-server-conf\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.928087 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-server-conf\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.928653 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-config-data\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.929124 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.929861 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.930036 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14da78cc-cd10-440d-9983-6e80d45f3e31-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.930769 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.932414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.935331 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14da78cc-cd10-440d-9983-6e80d45f3e31-pod-info\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" 
Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.936384 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.938507 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.938531 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/278d26c1-8a7c-4278-b84c-0c0c24d81f52-pod-info\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.938982 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.939004 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.939018 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f93760bfdbe999d7fc2ff672e7708eaae49a08043ada3581238c277596fdb17c/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.939027 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/074ae6b187beeece20d2cfb5ff5c72683c1851611c6a5bac612c514c4d6bbc9e/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.941323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gbz\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-kube-api-access-v6gbz\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.945842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.956956 4787 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.958516 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d554l\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-kube-api-access-d554l\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:08 crc kubenswrapper[4787]: I0219 19:39:08.959333 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/278d26c1-8a7c-4278-b84c-0c0c24d81f52-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.025596 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " pod="openstack/rabbitmq-server-1" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.031050 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " pod="openstack/rabbitmq-server-2" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.078958 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.104349 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.112785 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.112952 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.116064 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.116161 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.116733 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ff257" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.116743 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.117828 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.125884 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.128571 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.171837 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s7pld"] Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.236113 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.236218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/769a015d-4883-474b-a4e8-45a2b77f2412-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.236256 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.236287 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/769a015d-4883-474b-a4e8-45a2b77f2412-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.236312 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt65m\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-kube-api-access-kt65m\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 
19:39:09.236337 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.236411 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.237074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.237108 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.237138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc 
kubenswrapper[4787]: I0219 19:39:09.237163 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.300066 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339283 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/769a015d-4883-474b-a4e8-45a2b77f2412-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339322 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt65m\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-kube-api-access-kt65m\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339345 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339422 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339456 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339475 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339499 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339521 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339566 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339689 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/769a015d-4883-474b-a4e8-45a2b77f2412-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.339712 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.340541 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.340588 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.341444 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 
19:39:09.341897 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.343909 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.356077 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.370074 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt65m\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-kube-api-access-kt65m\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.371211 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/769a015d-4883-474b-a4e8-45a2b77f2412-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.372310 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/769a015d-4883-474b-a4e8-45a2b77f2412-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.375076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.378655 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.378693 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/559af49a0292dcec9d08b73ac569dce92007fe855ff53d32c237c0ba151f0ca4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.423489 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.441926 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.549443 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" event={"ID":"39fb25a6-b1cf-4d60-b1eb-9ec62b143166","Type":"ContainerStarted","Data":"a829855d755388dfd58eeb9b991905affa7506b9434f606a76fc1804013f9938"} Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.553375 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" event={"ID":"f7fd439a-6768-4d1c-8317-4d8cd31301bb","Type":"ContainerStarted","Data":"1d9d1bb6c88086e3649b60baa3c70a10e0dd83a36add9a3b65e852e3774b8b6e"} Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.696661 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:39:09 crc kubenswrapper[4787]: I0219 19:39:09.954696 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.040689 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.078981 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.081221 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.084467 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.084693 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x65sx" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.085047 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.086458 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.090038 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.114388 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.163636 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165413 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165455 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfcfl\" (UniqueName: \"kubernetes.io/projected/5848c368-e71c-439d-bfca-f241813f9136-kube-api-access-rfcfl\") pod \"openstack-galera-0\" (UID: 
\"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165521 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-kolla-config\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848c368-e71c-439d-bfca-f241813f9136-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5848c368-e71c-439d-bfca-f241813f9136-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-config-data-default\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165690 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848c368-e71c-439d-bfca-f241813f9136-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.165730 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268026 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfcfl\" (UniqueName: \"kubernetes.io/projected/5848c368-e71c-439d-bfca-f241813f9136-kube-api-access-rfcfl\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268438 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-kolla-config\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268572 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848c368-e71c-439d-bfca-f241813f9136-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " 
pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268641 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5848c368-e71c-439d-bfca-f241813f9136-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268689 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-config-data-default\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268722 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848c368-e71c-439d-bfca-f241813f9136-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.268784 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.269323 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5848c368-e71c-439d-bfca-f241813f9136-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.269723 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-kolla-config\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.270269 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-config-data-default\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.270764 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5848c368-e71c-439d-bfca-f241813f9136-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.272940 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.272980 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12f278b69a9f9e010ed11857ff2873844847e6d7edb4a320fb21a1fe7b0fec70/globalmount\"" pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.281637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5848c368-e71c-439d-bfca-f241813f9136-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.283001 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5848c368-e71c-439d-bfca-f241813f9136-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.294253 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfcfl\" (UniqueName: \"kubernetes.io/projected/5848c368-e71c-439d-bfca-f241813f9136-kube-api-access-rfcfl\") pod \"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.355055 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9fad60c2-6b88-4573-9458-824ba902e7fb\") pod 
\"openstack-galera-0\" (UID: \"5848c368-e71c-439d-bfca-f241813f9136\") " pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.406171 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.686679 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80458aec-a844-4f4d-b618-56bdc811cd43","Type":"ContainerStarted","Data":"ea204586da409cf37d9664ca91fd23dea40bbd80fdf91a3c284cd713802920b5"} Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.703865 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"769a015d-4883-474b-a4e8-45a2b77f2412","Type":"ContainerStarted","Data":"07e62df1b8434db679bd9ec8727b86c0ce349aee9c68debcda4cad54b2cfa938"} Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.734575 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14da78cc-cd10-440d-9983-6e80d45f3e31","Type":"ContainerStarted","Data":"4de50483fd2fab0a7fe80161a0b54c5cbd81f51b87aa1eb5e4fa06b45f45ec78"} Feb 19 19:39:10 crc kubenswrapper[4787]: I0219 19:39:10.779800 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"278d26c1-8a7c-4278-b84c-0c0c24d81f52","Type":"ContainerStarted","Data":"8ae3fda264dc085e1b0a5b18a2be762d7d4c5dbc7e2fe29dffa5117d50f7fa32"} Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.230781 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 19:39:11 crc kubenswrapper[4787]: W0219 19:39:11.270635 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5848c368_e71c_439d_bfca_f241813f9136.slice/crio-05fa6f616f55f8d212dbcd94e554f22e5b9592d9b30c25cc900192bc1d7267d2 WatchSource:0}: Error finding 
container 05fa6f616f55f8d212dbcd94e554f22e5b9592d9b30c25cc900192bc1d7267d2: Status 404 returned error can't find the container with id 05fa6f616f55f8d212dbcd94e554f22e5b9592d9b30c25cc900192bc1d7267d2 Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.503217 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.504876 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.508888 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bh7f2" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.509152 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.509281 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.509406 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.522315 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.622450 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fec6d8b2-4d43-4053-8028-747e6d28f7c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.622507 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec6d8b2-4d43-4053-8028-747e6d28f7c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.622532 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.622573 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.622625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fec6d8b2-4d43-4053-8028-747e6d28f7c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.622821 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.623175 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knxq\" (UniqueName: \"kubernetes.io/projected/fec6d8b2-4d43-4053-8028-747e6d28f7c4-kube-api-access-4knxq\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.623214 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.658800 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.660233 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.662119 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-x547f" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.662910 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.666274 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.671472 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.724947 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3590400-4951-4e45-b479-bc2d31b92a57-config-data\") pod \"memcached-0\" (UID: 
\"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725007 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725060 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3590400-4951-4e45-b479-bc2d31b92a57-kolla-config\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725349 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4knxq\" (UniqueName: \"kubernetes.io/projected/fec6d8b2-4d43-4053-8028-747e6d28f7c4-kube-api-access-4knxq\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725466 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725510 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78tg\" (UniqueName: \"kubernetes.io/projected/d3590400-4951-4e45-b479-bc2d31b92a57-kube-api-access-t78tg\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 
19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725663 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3590400-4951-4e45-b479-bc2d31b92a57-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725742 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fec6d8b2-4d43-4053-8028-747e6d28f7c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec6d8b2-4d43-4053-8028-747e6d28f7c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.725887 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.726028 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.726036 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.726550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fec6d8b2-4d43-4053-8028-747e6d28f7c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.727279 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.727697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fec6d8b2-4d43-4053-8028-747e6d28f7c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.727766 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3590400-4951-4e45-b479-bc2d31b92a57-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.728571 4787 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fec6d8b2-4d43-4053-8028-747e6d28f7c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.736097 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.736148 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15300970b1ce21b6978a49dd9fdeee7e5f74fab22730d230153f4d9528088043/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.738177 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec6d8b2-4d43-4053-8028-747e6d28f7c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.747408 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4knxq\" (UniqueName: \"kubernetes.io/projected/fec6d8b2-4d43-4053-8028-747e6d28f7c4-kube-api-access-4knxq\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.748278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fec6d8b2-4d43-4053-8028-747e6d28f7c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.813435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5848c368-e71c-439d-bfca-f241813f9136","Type":"ContainerStarted","Data":"05fa6f616f55f8d212dbcd94e554f22e5b9592d9b30c25cc900192bc1d7267d2"} Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.813740 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e37f2f8-2f89-4a33-9563-d61d32aa73b5\") pod \"openstack-cell1-galera-0\" (UID: \"fec6d8b2-4d43-4053-8028-747e6d28f7c4\") " pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.831503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3590400-4951-4e45-b479-bc2d31b92a57-kolla-config\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.831576 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78tg\" (UniqueName: \"kubernetes.io/projected/d3590400-4951-4e45-b479-bc2d31b92a57-kube-api-access-t78tg\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.831664 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3590400-4951-4e45-b479-bc2d31b92a57-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " 
pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.831804 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3590400-4951-4e45-b479-bc2d31b92a57-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.831844 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3590400-4951-4e45-b479-bc2d31b92a57-config-data\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.833823 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d3590400-4951-4e45-b479-bc2d31b92a57-kolla-config\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.834206 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3590400-4951-4e45-b479-bc2d31b92a57-config-data\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.834527 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.838273 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3590400-4951-4e45-b479-bc2d31b92a57-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.857638 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78tg\" (UniqueName: \"kubernetes.io/projected/d3590400-4951-4e45-b479-bc2d31b92a57-kube-api-access-t78tg\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:11 crc kubenswrapper[4787]: I0219 19:39:11.860696 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3590400-4951-4e45-b479-bc2d31b92a57-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d3590400-4951-4e45-b479-bc2d31b92a57\") " pod="openstack/memcached-0" Feb 19 19:39:12 crc kubenswrapper[4787]: I0219 19:39:12.010025 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 19:39:12 crc kubenswrapper[4787]: I0219 19:39:12.387652 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 19:39:13 crc kubenswrapper[4787]: I0219 19:39:13.708595 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 19:39:13 crc kubenswrapper[4787]: W0219 19:39:13.809036 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3590400_4951_4e45_b479_bc2d31b92a57.slice/crio-0a3ca42b744eba65ed5492e5c9777fa664a53568ab3fa7f69c45d4787deb328a WatchSource:0}: Error finding container 0a3ca42b744eba65ed5492e5c9777fa664a53568ab3fa7f69c45d4787deb328a: Status 404 returned error can't find the container with id 0a3ca42b744eba65ed5492e5c9777fa664a53568ab3fa7f69c45d4787deb328a Feb 19 19:39:13 crc kubenswrapper[4787]: I0219 19:39:13.846493 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d3590400-4951-4e45-b479-bc2d31b92a57","Type":"ContainerStarted","Data":"0a3ca42b744eba65ed5492e5c9777fa664a53568ab3fa7f69c45d4787deb328a"} Feb 19 19:39:13 crc kubenswrapper[4787]: I0219 19:39:13.853290 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fec6d8b2-4d43-4053-8028-747e6d28f7c4","Type":"ContainerStarted","Data":"29b13d322ae2c30c1df3ca1262799877fc9164055f528cc90e91dd5d711dd4ea"} Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.357992 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.359521 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.369756 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nhbwb" Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.387105 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.457015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9xw\" (UniqueName: \"kubernetes.io/projected/a15222b2-deb2-46d1-a58d-d58d78228940-kube-api-access-df9xw\") pod \"kube-state-metrics-0\" (UID: \"a15222b2-deb2-46d1-a58d-d58d78228940\") " pod="openstack/kube-state-metrics-0" Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.561213 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9xw\" (UniqueName: \"kubernetes.io/projected/a15222b2-deb2-46d1-a58d-d58d78228940-kube-api-access-df9xw\") pod \"kube-state-metrics-0\" (UID: \"a15222b2-deb2-46d1-a58d-d58d78228940\") " pod="openstack/kube-state-metrics-0" Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.594488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9xw\" (UniqueName: \"kubernetes.io/projected/a15222b2-deb2-46d1-a58d-d58d78228940-kube-api-access-df9xw\") pod \"kube-state-metrics-0\" (UID: \"a15222b2-deb2-46d1-a58d-d58d78228940\") " pod="openstack/kube-state-metrics-0" Feb 19 19:39:14 crc kubenswrapper[4787]: I0219 19:39:14.723253 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.064914 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.066558 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.070929 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-75kbr" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.071099 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.102567 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.203972 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmsr\" (UniqueName: \"kubernetes.io/projected/191731f4-3080-4ae3-9aab-f44a30a33246-kube-api-access-pmmsr\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.204146 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/191731f4-3080-4ae3-9aab-f44a30a33246-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 
19:39:15.308603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmsr\" (UniqueName: \"kubernetes.io/projected/191731f4-3080-4ae3-9aab-f44a30a33246-kube-api-access-pmmsr\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.308742 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/191731f4-3080-4ae3-9aab-f44a30a33246-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: E0219 19:39:15.308918 4787 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 19 19:39:15 crc kubenswrapper[4787]: E0219 19:39:15.308970 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/191731f4-3080-4ae3-9aab-f44a30a33246-serving-cert podName:191731f4-3080-4ae3-9aab-f44a30a33246 nodeName:}" failed. No retries permitted until 2026-02-19 19:39:15.808950256 +0000 UTC m=+1223.599616198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/191731f4-3080-4ae3-9aab-f44a30a33246-serving-cert") pod "observability-ui-dashboards-66cbf594b5-mjrdv" (UID: "191731f4-3080-4ae3-9aab-f44a30a33246") : secret "observability-ui-dashboards" not found Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.346115 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmsr\" (UniqueName: \"kubernetes.io/projected/191731f4-3080-4ae3-9aab-f44a30a33246-kube-api-access-pmmsr\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.488815 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.498120 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-967968ff4-947nc"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.506051 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.542665 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-967968ff4-947nc"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.566303 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.683997 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-oauth-serving-cert\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.684319 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e826d3df-6eed-485f-b12d-d1fdee12d975-console-serving-cert\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.684412 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-service-ca\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.684462 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h8wp\" (UniqueName: \"kubernetes.io/projected/e826d3df-6eed-485f-b12d-d1fdee12d975-kube-api-access-9h8wp\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " 
pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.684485 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e826d3df-6eed-485f-b12d-d1fdee12d975-console-oauth-config\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.684508 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-trusted-ca-bundle\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.684551 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-console-config\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.695851 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.695988 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.702580 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.702810 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.703159 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.703303 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.715247 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.716129 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.716267 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ggzf2" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.752342 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.813749 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-console-config\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.813816 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-oauth-serving-cert\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.813849 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e826d3df-6eed-485f-b12d-d1fdee12d975-console-serving-cert\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.813902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/191731f4-3080-4ae3-9aab-f44a30a33246-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.813947 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-service-ca\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.814003 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h8wp\" (UniqueName: \"kubernetes.io/projected/e826d3df-6eed-485f-b12d-d1fdee12d975-kube-api-access-9h8wp\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.814022 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e826d3df-6eed-485f-b12d-d1fdee12d975-console-oauth-config\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.814043 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-trusted-ca-bundle\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.815260 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-trusted-ca-bundle\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.815813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-service-ca\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.820454 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-oauth-serving-cert\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.821010 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/e826d3df-6eed-485f-b12d-d1fdee12d975-console-config\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.831506 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e826d3df-6eed-485f-b12d-d1fdee12d975-console-oauth-config\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.835422 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/191731f4-3080-4ae3-9aab-f44a30a33246-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-mjrdv\" (UID: \"191731f4-3080-4ae3-9aab-f44a30a33246\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.839395 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e826d3df-6eed-485f-b12d-d1fdee12d975-console-serving-cert\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.858566 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h8wp\" (UniqueName: \"kubernetes.io/projected/e826d3df-6eed-485f-b12d-d1fdee12d975-kube-api-access-9h8wp\") pod \"console-967968ff4-947nc\" (UID: \"e826d3df-6eed-485f-b12d-d1fdee12d975\") " pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.877366 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918030 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918133 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918177 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918197 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918242 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" 
(UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918716 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918749 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nth6t\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-kube-api-access-nth6t\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918828 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.918888 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:15 crc kubenswrapper[4787]: I0219 19:39:15.927404 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a15222b2-deb2-46d1-a58d-d58d78228940","Type":"ContainerStarted","Data":"42c3060802ebf377be95e37d6cf0cfe485e867657daf6a306854baaecaf2d17d"}
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.002046 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020472 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020493 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020558 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020582 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020664 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nth6t\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-kube-api-access-nth6t\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020735 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020791 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.020827 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.021622 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.022149 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.024533 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.026733 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.031038 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.031087 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fb78e0f464557884d12dba3876706e0230d208030e57985f04fcfaeb4b1f767e/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.031449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.035270 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.043234 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.046813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nth6t\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-kube-api-access-nth6t\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.053023 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.121866 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:16 crc kubenswrapper[4787]: I0219 19:39:16.340097 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.074674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n4c8f"]
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.078366 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.080823 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.081027 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-999fg"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.081201 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.108239 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n4c8f"]
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.121423 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-x42pw"]
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.123587 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.149171 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x42pw"]
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165531 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e04b88-069d-4c44-9511-ed765c0424ae-combined-ca-bundle\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e04b88-069d-4c44-9511-ed765c0424ae-scripts\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165635 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-run-ovn\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-run\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165715 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-log-ovn\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzm4\" (UniqueName: \"kubernetes.io/projected/19e04b88-069d-4c44-9511-ed765c0424ae-kube-api-access-crzm4\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.165833 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e04b88-069d-4c44-9511-ed765c0424ae-ovn-controller-tls-certs\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.177216 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv"]
Feb 19 19:39:17 crc kubenswrapper[4787]: W0219 19:39:17.233923 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod191731f4_3080_4ae3_9aab_f44a30a33246.slice/crio-0de03e075593c2223a6f0264f89b5b13103ffe61e03de2918c04a64a2b502de8 WatchSource:0}: Error finding container 0de03e075593c2223a6f0264f89b5b13103ffe61e03de2918c04a64a2b502de8: Status 404 returned error can't find the container with id 0de03e075593c2223a6f0264f89b5b13103ffe61e03de2918c04a64a2b502de8
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.267887 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-log\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.267950 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e04b88-069d-4c44-9511-ed765c0424ae-combined-ca-bundle\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268001 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e04b88-069d-4c44-9511-ed765c0424ae-scripts\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7km\" (UniqueName: \"kubernetes.io/projected/f94e933a-d230-409b-99ff-f47cf13a9638-kube-api-access-9q7km\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268067 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-run-ovn\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268084 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94e933a-d230-409b-99ff-f47cf13a9638-scripts\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268104 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-run\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268140 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-run\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-lib\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268197 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzm4\" (UniqueName: \"kubernetes.io/projected/19e04b88-069d-4c44-9511-ed765c0424ae-kube-api-access-crzm4\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268211 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-log-ovn\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268230 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-etc-ovs\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.268272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e04b88-069d-4c44-9511-ed765c0424ae-ovn-controller-tls-certs\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.269254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-run-ovn\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.269405 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-log-ovn\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.269517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19e04b88-069d-4c44-9511-ed765c0424ae-var-run\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.271136 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19e04b88-069d-4c44-9511-ed765c0424ae-scripts\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.289353 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19e04b88-069d-4c44-9511-ed765c0424ae-combined-ca-bundle\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.297270 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzm4\" (UniqueName: \"kubernetes.io/projected/19e04b88-069d-4c44-9511-ed765c0424ae-kube-api-access-crzm4\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.298297 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-967968ff4-947nc"]
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.303824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19e04b88-069d-4c44-9511-ed765c0424ae-ovn-controller-tls-certs\") pod \"ovn-controller-n4c8f\" (UID: \"19e04b88-069d-4c44-9511-ed765c0424ae\") " pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370446 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-log\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370507 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7km\" (UniqueName: \"kubernetes.io/projected/f94e933a-d230-409b-99ff-f47cf13a9638-kube-api-access-9q7km\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370532 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94e933a-d230-409b-99ff-f47cf13a9638-scripts\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370553 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-run\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370592 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-lib\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-etc-ovs\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-log\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-etc-ovs\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.370863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-run\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.371030 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f94e933a-d230-409b-99ff-f47cf13a9638-var-lib\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.373021 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f94e933a-d230-409b-99ff-f47cf13a9638-scripts\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.409981 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n4c8f"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.410236 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7km\" (UniqueName: \"kubernetes.io/projected/f94e933a-d230-409b-99ff-f47cf13a9638-kube-api-access-9q7km\") pod \"ovn-controller-ovs-x42pw\" (UID: \"f94e933a-d230-409b-99ff-f47cf13a9638\") " pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.473191 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.540925 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 19:39:17 crc kubenswrapper[4787]: W0219 19:39:17.667243 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ea71272_2b73_49c3_a5e2_57e7ac632a7f.slice/crio-c0d0e826aacf570a37c1b8bff11a3db940a0ff25fb452ecbba360abd3dd7d4c7 WatchSource:0}: Error finding container c0d0e826aacf570a37c1b8bff11a3db940a0ff25fb452ecbba360abd3dd7d4c7: Status 404 returned error can't find the container with id c0d0e826aacf570a37c1b8bff11a3db940a0ff25fb452ecbba360abd3dd7d4c7
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.940261 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.942678 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.944852 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.945049 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.947164 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dcd9k"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.947364 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.947583 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 19 19:39:17 crc kubenswrapper[4787]: I0219 19:39:17.970244 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.003810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-967968ff4-947nc" event={"ID":"e826d3df-6eed-485f-b12d-d1fdee12d975","Type":"ContainerStarted","Data":"9b768c9894dd2625347f7633f1bc1b9754641ec1c236cdd36ec139bfc96e5eb3"}
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.008551 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" event={"ID":"191731f4-3080-4ae3-9aab-f44a30a33246","Type":"ContainerStarted","Data":"0de03e075593c2223a6f0264f89b5b13103ffe61e03de2918c04a64a2b502de8"}
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.032821 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerStarted","Data":"c0d0e826aacf570a37c1b8bff11a3db940a0ff25fb452ecbba360abd3dd7d4c7"}
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.090728 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.090782 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.090817 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.090862 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.090998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.091094 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.091145 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2mdt\" (UniqueName: \"kubernetes.io/projected/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-kube-api-access-z2mdt\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.091183 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193293 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193355 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193383 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193414 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193478 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193563 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.193627 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2mdt\" (UniqueName: \"kubernetes.io/projected/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-kube-api-access-z2mdt\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.195284 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.195557 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.196182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.199566 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0"
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.200231 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\"
(UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.200372 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.255968 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2mdt\" (UniqueName: \"kubernetes.io/projected/2591ddd4-424a-4be8-ad04-62ad3e0a82a6-kube-api-access-z2mdt\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.256502 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.256545 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/09f958886b136609b8d8f0da0d52f51c9cbf8852a74f000d1f48c6891b94a2c9/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.334699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2e580994-13cd-4ab9-86e9-d62fa351425c\") pod \"ovsdbserver-nb-0\" (UID: \"2591ddd4-424a-4be8-ad04-62ad3e0a82a6\") " pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:18 crc kubenswrapper[4787]: I0219 19:39:18.582682 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.256868 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.259777 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.263635 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.263806 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.263808 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ggvtf" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.263939 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.269787 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.363206 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.363327 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.363378 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.363450 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrgf\" (UniqueName: \"kubernetes.io/projected/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-kube-api-access-cqrgf\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.363470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.363507 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dcefb124-3235-4c22-924d-ee76180e991f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcefb124-3235-4c22-924d-ee76180e991f\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.364071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.364112 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.466795 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrgf\" (UniqueName: \"kubernetes.io/projected/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-kube-api-access-cqrgf\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.466881 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dcefb124-3235-4c22-924d-ee76180e991f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcefb124-3235-4c22-924d-ee76180e991f\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.466907 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.467016 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.467295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc 
kubenswrapper[4787]: I0219 19:39:21.467528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.467588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.467626 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.468699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.470010 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-config\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.470929 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.477883 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.477926 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dcefb124-3235-4c22-924d-ee76180e991f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcefb124-3235-4c22-924d-ee76180e991f\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c0e213d8cf6aef532c88d020cd0c51e926639b455fd7438a7ae53456575f9f03/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.494385 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.500355 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.500712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrgf\" (UniqueName: \"kubernetes.io/projected/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-kube-api-access-cqrgf\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 
crc kubenswrapper[4787]: I0219 19:39:21.500760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cd5d9e-251b-4f3f-9402-b19a7676c9a5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.545837 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dcefb124-3235-4c22-924d-ee76180e991f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcefb124-3235-4c22-924d-ee76180e991f\") pod \"ovsdbserver-sb-0\" (UID: \"20cd5d9e-251b-4f3f-9402-b19a7676c9a5\") " pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.585825 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:21 crc kubenswrapper[4787]: I0219 19:39:21.894442 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x42pw"] Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.316321 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-j5ggf"] Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.317520 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.324598 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.359576 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j5ggf"] Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.384911 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71fcf51-23db-4068-9690-1624d25948cb-combined-ca-bundle\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.384969 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f71fcf51-23db-4068-9690-1624d25948cb-ovn-rundir\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.384998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f71fcf51-23db-4068-9690-1624d25948cb-ovs-rundir\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.385038 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm86x\" (UniqueName: \"kubernetes.io/projected/f71fcf51-23db-4068-9690-1624d25948cb-kube-api-access-pm86x\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " 
pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.385079 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71fcf51-23db-4068-9690-1624d25948cb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.385137 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71fcf51-23db-4068-9690-1624d25948cb-config\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71fcf51-23db-4068-9690-1624d25948cb-combined-ca-bundle\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486571 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f71fcf51-23db-4068-9690-1624d25948cb-ovn-rundir\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486594 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f71fcf51-23db-4068-9690-1624d25948cb-ovs-rundir\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " 
pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm86x\" (UniqueName: \"kubernetes.io/projected/f71fcf51-23db-4068-9690-1624d25948cb-kube-api-access-pm86x\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486677 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71fcf51-23db-4068-9690-1624d25948cb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71fcf51-23db-4068-9690-1624d25948cb-config\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.486884 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f71fcf51-23db-4068-9690-1624d25948cb-ovn-rundir\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.487263 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f71fcf51-23db-4068-9690-1624d25948cb-ovs-rundir\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc 
kubenswrapper[4787]: I0219 19:39:22.487528 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f71fcf51-23db-4068-9690-1624d25948cb-config\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.492494 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71fcf51-23db-4068-9690-1624d25948cb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.493135 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71fcf51-23db-4068-9690-1624d25948cb-combined-ca-bundle\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.505315 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm86x\" (UniqueName: \"kubernetes.io/projected/f71fcf51-23db-4068-9690-1624d25948cb-kube-api-access-pm86x\") pod \"ovn-controller-metrics-j5ggf\" (UID: \"f71fcf51-23db-4068-9690-1624d25948cb\") " pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:22 crc kubenswrapper[4787]: I0219 19:39:22.685404 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-j5ggf" Feb 19 19:39:24 crc kubenswrapper[4787]: I0219 19:39:24.094483 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x42pw" event={"ID":"f94e933a-d230-409b-99ff-f47cf13a9638","Type":"ContainerStarted","Data":"04a31a07b67f550c44ecddad07f2035bbc3722c8395e25948f9777756d68100f"} Feb 19 19:39:24 crc kubenswrapper[4787]: I0219 19:39:24.231601 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n4c8f"] Feb 19 19:39:24 crc kubenswrapper[4787]: W0219 19:39:24.743974 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19e04b88_069d_4c44_9511_ed765c0424ae.slice/crio-f4c57c60a5aaf721fe4440ca46587a182925d2d1045e725bee75f14ed0f2f410 WatchSource:0}: Error finding container f4c57c60a5aaf721fe4440ca46587a182925d2d1045e725bee75f14ed0f2f410: Status 404 returned error can't find the container with id f4c57c60a5aaf721fe4440ca46587a182925d2d1045e725bee75f14ed0f2f410 Feb 19 19:39:25 crc kubenswrapper[4787]: I0219 19:39:25.104288 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n4c8f" event={"ID":"19e04b88-069d-4c44-9511-ed765c0424ae","Type":"ContainerStarted","Data":"f4c57c60a5aaf721fe4440ca46587a182925d2d1045e725bee75f14ed0f2f410"} Feb 19 19:39:36 crc kubenswrapper[4787]: E0219 19:39:36.254453 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Feb 19 19:39:36 crc kubenswrapper[4787]: E0219 19:39:36.255135 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmmsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-mjrdv_openshift-operators(191731f4-3080-4ae3-9aab-f44a30a33246): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 19:39:36 crc kubenswrapper[4787]: E0219 19:39:36.256856 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" podUID="191731f4-3080-4ae3-9aab-f44a30a33246" Feb 19 19:39:36 crc kubenswrapper[4787]: E0219 19:39:36.969488 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 19 19:39:36 crc kubenswrapper[4787]: E0219 19:39:36.969733 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n685hchf6h9chdh77h65dh687h66fh589h654h697h656h574h667hfdh5b9h57bh56dh557hc9h78h5f4h67fhd4h645h67dh699h5d4h6ch595h67dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropag
ation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t78tg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d3590400-4951-4e45-b479-bc2d31b92a57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:36 crc kubenswrapper[4787]: E0219 19:39:36.970838 4787 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d3590400-4951-4e45-b479-bc2d31b92a57" Feb 19 19:39:37 crc kubenswrapper[4787]: E0219 19:39:37.203547 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" podUID="191731f4-3080-4ae3-9aab-f44a30a33246" Feb 19 19:39:37 crc kubenswrapper[4787]: E0219 19:39:37.203569 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="d3590400-4951-4e45-b479-bc2d31b92a57" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.566689 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.567121 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfcfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(5848c368-e71c-439d-bfca-f241813f9136): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.567407 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.567491 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4knxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/de
v/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(fec6d8b2-4d43-4053-8028-747e6d28f7c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.568310 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.569006 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.613410 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.613803 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt65m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext
{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(769a015d-4883-474b-a4e8-45a2b77f2412): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.615325 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.643183 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.643378 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d554l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(278d26c1-8a7c-4278-b84c-0c0c24d81f52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:38 crc kubenswrapper[4787]: E0219 19:39:38.644961 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" Feb 19 19:39:39 crc kubenswrapper[4787]: I0219 19:39:39.060793 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 19:39:39 crc kubenswrapper[4787]: E0219 19:39:39.224247 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" Feb 19 19:39:39 crc kubenswrapper[4787]: E0219 19:39:39.227112 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" Feb 19 19:39:39 crc kubenswrapper[4787]: I0219 19:39:39.263092 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:39:39 crc kubenswrapper[4787]: I0219 19:39:39.263156 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:39:39 crc kubenswrapper[4787]: E0219 19:39:39.868987 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Feb 19 19:39:39 crc kubenswrapper[4787]: E0219 19:39:39.869508 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574hfhbfh556h6hddh66bh68dhb5hdchbdhcchf6h584h6hf6h57fh77h64bh96hb7hfbh5ch5cdh8bh79h559h594h547hdch678h54dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9q7km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount
,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-x42pw_openstack(f94e933a-d230-409b-99ff-f47cf13a9638): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:39 crc kubenswrapper[4787]: E0219 19:39:39.870959 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-x42pw" podUID="f94e933a-d230-409b-99ff-f47cf13a9638" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.235346 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-x42pw" podUID="f94e933a-d230-409b-99ff-f47cf13a9638" Feb 19 19:39:40 crc kubenswrapper[4787]: W0219 19:39:40.674625 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2591ddd4_424a_4be8_ad04_62ad3e0a82a6.slice/crio-2774b1d4cd6b3929bfdcd989652cfaaed69ed720ff2e1a222d74cbdd8852dc76 WatchSource:0}: Error 
finding container 2774b1d4cd6b3929bfdcd989652cfaaed69ed720ff2e1a222d74cbdd8852dc76: Status 404 returned error can't find the container with id 2774b1d4cd6b3929bfdcd989652cfaaed69ed720ff2e1a222d74cbdd8852dc76 Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.709037 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.709496 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6gbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(14da78cc-cd10-440d-9983-6e80d45f3e31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:40 crc 
kubenswrapper[4787]: E0219 19:39:40.710818 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.733569 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.733750 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xctbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-rmj5w_openstack(978a88dc-a643-4bc9-b46e-8c8e78f7b4cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.734952 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" podUID="978a88dc-a643-4bc9-b46e-8c8e78f7b4cc" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.739824 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.739984 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc9ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-s7pld_openstack(f7fd439a-6768-4d1c-8317-4d8cd31301bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.741172 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" podUID="f7fd439a-6768-4d1c-8317-4d8cd31301bb" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.746266 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.746501 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bt5sj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-x2n4w_openstack(39fb25a6-b1cf-4d60-b1eb-9ec62b143166): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.747655 4787 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" podUID="39fb25a6-b1cf-4d60-b1eb-9ec62b143166" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.752504 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.752694 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxrn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vlx8z_openstack(120d3242-bcef-48c2-be1c-a82698775f07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:39:40 crc kubenswrapper[4787]: E0219 19:39:40.753977 4787 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" podUID="120d3242-bcef-48c2-be1c-a82698775f07" Feb 19 19:39:41 crc kubenswrapper[4787]: I0219 19:39:41.243204 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2591ddd4-424a-4be8-ad04-62ad3e0a82a6","Type":"ContainerStarted","Data":"2774b1d4cd6b3929bfdcd989652cfaaed69ed720ff2e1a222d74cbdd8852dc76"} Feb 19 19:39:41 crc kubenswrapper[4787]: I0219 19:39:41.246691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-967968ff4-947nc" event={"ID":"e826d3df-6eed-485f-b12d-d1fdee12d975","Type":"ContainerStarted","Data":"692ee519871cbae87578af2a28e933789571a7f4b7a1469eaa358dfeb61aa1d7"} Feb 19 19:39:41 crc kubenswrapper[4787]: E0219 19:39:41.250292 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" podUID="39fb25a6-b1cf-4d60-b1eb-9ec62b143166" Feb 19 19:39:41 crc kubenswrapper[4787]: E0219 19:39:41.250294 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" podUID="f7fd439a-6768-4d1c-8317-4d8cd31301bb" Feb 19 19:39:41 crc kubenswrapper[4787]: I0219 19:39:41.345551 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-967968ff4-947nc" podStartSLOduration=26.345532064 podStartE2EDuration="26.345532064s" podCreationTimestamp="2026-02-19 19:39:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:39:41.342491206 +0000 UTC m=+1249.133157148" watchObservedRunningTime="2026-02-19 19:39:41.345532064 +0000 UTC m=+1249.136198006" Feb 19 19:39:42 crc kubenswrapper[4787]: I0219 19:39:42.677669 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 19:39:42 crc kubenswrapper[4787]: I0219 19:39:42.778960 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j5ggf"] Feb 19 19:39:43 crc kubenswrapper[4787]: W0219 19:39:43.160704 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cd5d9e_251b_4f3f_9402_b19a7676c9a5.slice/crio-70ec313bb5c7aa15d10a5b566df967fa3836c8598fea3b289e7901df3fbdec83 WatchSource:0}: Error finding container 70ec313bb5c7aa15d10a5b566df967fa3836c8598fea3b289e7901df3fbdec83: Status 404 returned error can't find the container with id 70ec313bb5c7aa15d10a5b566df967fa3836c8598fea3b289e7901df3fbdec83 Feb 19 19:39:43 crc kubenswrapper[4787]: E0219 19:39:43.161822 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 19:39:43 crc kubenswrapper[4787]: E0219 19:39:43.161858 4787 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 19 19:39:43 crc kubenswrapper[4787]: E0219 19:39:43.161994 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-df9xw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(a15222b2-deb2-46d1-a58d-d58d78228940): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Feb 19 19:39:43 crc kubenswrapper[4787]: E0219 19:39:43.163999 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.258141 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.264167 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.375425 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xctbj\" (UniqueName: \"kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj\") pod \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.375642 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-config\") pod \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\" (UID: \"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc\") " Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.375670 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-config\") pod \"120d3242-bcef-48c2-be1c-a82698775f07\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.375694 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fxrn6\" (UniqueName: \"kubernetes.io/projected/120d3242-bcef-48c2-be1c-a82698775f07-kube-api-access-fxrn6\") pod \"120d3242-bcef-48c2-be1c-a82698775f07\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.375801 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-dns-svc\") pod \"120d3242-bcef-48c2-be1c-a82698775f07\" (UID: \"120d3242-bcef-48c2-be1c-a82698775f07\") " Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.376234 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-config" (OuterVolumeSpecName: "config") pod "978a88dc-a643-4bc9-b46e-8c8e78f7b4cc" (UID: "978a88dc-a643-4bc9-b46e-8c8e78f7b4cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.376320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-config" (OuterVolumeSpecName: "config") pod "120d3242-bcef-48c2-be1c-a82698775f07" (UID: "120d3242-bcef-48c2-be1c-a82698775f07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.376376 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "120d3242-bcef-48c2-be1c-a82698775f07" (UID: "120d3242-bcef-48c2-be1c-a82698775f07"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.379582 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120d3242-bcef-48c2-be1c-a82698775f07-kube-api-access-fxrn6" (OuterVolumeSpecName: "kube-api-access-fxrn6") pod "120d3242-bcef-48c2-be1c-a82698775f07" (UID: "120d3242-bcef-48c2-be1c-a82698775f07"). InnerVolumeSpecName "kube-api-access-fxrn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.387578 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj" (OuterVolumeSpecName: "kube-api-access-xctbj") pod "978a88dc-a643-4bc9-b46e-8c8e78f7b4cc" (UID: "978a88dc-a643-4bc9-b46e-8c8e78f7b4cc"). InnerVolumeSpecName "kube-api-access-xctbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.478757 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.478784 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.478794 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxrn6\" (UniqueName: \"kubernetes.io/projected/120d3242-bcef-48c2-be1c-a82698775f07-kube-api-access-fxrn6\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.478804 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/120d3242-bcef-48c2-be1c-a82698775f07-dns-svc\") on node 
\"crc\" DevicePath \"\"" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.478813 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xctbj\" (UniqueName: \"kubernetes.io/projected/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc-kube-api-access-xctbj\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.632741 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.632739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vlx8z" event={"ID":"120d3242-bcef-48c2-be1c-a82698775f07","Type":"ContainerDied","Data":"2b2d05425c3f328517eb714c43348bcf6c50fd2b0af85114b6889062de0fbf3a"} Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.640391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"20cd5d9e-251b-4f3f-9402-b19a7676c9a5","Type":"ContainerStarted","Data":"70ec313bb5c7aa15d10a5b566df967fa3836c8598fea3b289e7901df3fbdec83"} Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.642188 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" event={"ID":"978a88dc-a643-4bc9-b46e-8c8e78f7b4cc","Type":"ContainerDied","Data":"201a51f4c0fd91d7e036740b7b3dbbb99c347a8deaa607c577710722df295af5"} Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.642289 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-rmj5w" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.648566 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j5ggf" event={"ID":"f71fcf51-23db-4068-9690-1624d25948cb","Type":"ContainerStarted","Data":"029cd97d88ad460ec0b38697ce94eac469601081276cd6d7bb7e2d6a689ea115"} Feb 19 19:39:43 crc kubenswrapper[4787]: E0219 19:39:43.652959 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.711638 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vlx8z"] Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.719556 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vlx8z"] Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.744668 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rmj5w"] Feb 19 19:39:43 crc kubenswrapper[4787]: I0219 19:39:43.752883 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-rmj5w"] Feb 19 19:39:44 crc kubenswrapper[4787]: I0219 19:39:44.658871 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2591ddd4-424a-4be8-ad04-62ad3e0a82a6","Type":"ContainerStarted","Data":"6d0eb0f10132c129f49bbca90db68f5d4d9773cb04fac54d6555e26251859532"} Feb 19 19:39:44 crc kubenswrapper[4787]: I0219 19:39:44.660729 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n4c8f" 
event={"ID":"19e04b88-069d-4c44-9511-ed765c0424ae","Type":"ContainerStarted","Data":"31888814cafe0c7155e1feeab69d12d8ac730be00983f9e9eff9525657a919ce"} Feb 19 19:39:44 crc kubenswrapper[4787]: I0219 19:39:44.660958 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n4c8f" Feb 19 19:39:44 crc kubenswrapper[4787]: I0219 19:39:44.689034 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n4c8f" podStartSLOduration=9.316217907 podStartE2EDuration="27.689017722s" podCreationTimestamp="2026-02-19 19:39:17 +0000 UTC" firstStartedPulling="2026-02-19 19:39:24.793428774 +0000 UTC m=+1232.584094716" lastFinishedPulling="2026-02-19 19:39:43.166228589 +0000 UTC m=+1250.956894531" observedRunningTime="2026-02-19 19:39:44.684281066 +0000 UTC m=+1252.474947028" watchObservedRunningTime="2026-02-19 19:39:44.689017722 +0000 UTC m=+1252.479683664" Feb 19 19:39:44 crc kubenswrapper[4787]: I0219 19:39:44.908723 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120d3242-bcef-48c2-be1c-a82698775f07" path="/var/lib/kubelet/pods/120d3242-bcef-48c2-be1c-a82698775f07/volumes" Feb 19 19:39:44 crc kubenswrapper[4787]: I0219 19:39:44.909960 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="978a88dc-a643-4bc9-b46e-8c8e78f7b4cc" path="/var/lib/kubelet/pods/978a88dc-a643-4bc9-b46e-8c8e78f7b4cc/volumes" Feb 19 19:39:45 crc kubenswrapper[4787]: I0219 19:39:45.675150 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80458aec-a844-4f4d-b618-56bdc811cd43","Type":"ContainerStarted","Data":"b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8"} Feb 19 19:39:45 crc kubenswrapper[4787]: I0219 19:39:45.677464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" 
event={"ID":"14da78cc-cd10-440d-9983-6e80d45f3e31","Type":"ContainerStarted","Data":"d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9"} Feb 19 19:39:45 crc kubenswrapper[4787]: I0219 19:39:45.678770 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"278d26c1-8a7c-4278-b84c-0c0c24d81f52","Type":"ContainerStarted","Data":"b981e8e43871a722314887a58a1a921805ea8edc9b175836152b862fcf86e7c3"} Feb 19 19:39:45 crc kubenswrapper[4787]: I0219 19:39:45.879098 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:45 crc kubenswrapper[4787]: I0219 19:39:45.879159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:45 crc kubenswrapper[4787]: I0219 19:39:45.885572 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.689012 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"20cd5d9e-251b-4f3f-9402-b19a7676c9a5","Type":"ContainerStarted","Data":"c8c56a37c142e1e5aeb6380c10c80d3b6effb5dbe9d1ba481137272e255b4531"} Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.689583 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"20cd5d9e-251b-4f3f-9402-b19a7676c9a5","Type":"ContainerStarted","Data":"1255318827f30cb6c8c8bf519371f0df369e54c6ec84570a38596cad63ddc123"} Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.690474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j5ggf" event={"ID":"f71fcf51-23db-4068-9690-1624d25948cb","Type":"ContainerStarted","Data":"52b4829c3b88ac74a5d063fc096cb840643f730754510843431adb77e86fd9a3"} Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.692020 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"769a015d-4883-474b-a4e8-45a2b77f2412","Type":"ContainerStarted","Data":"02ff1da193c93f126feece115773dd2392f6318ce735af54242365046cdaebba"} Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.694228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2591ddd4-424a-4be8-ad04-62ad3e0a82a6","Type":"ContainerStarted","Data":"22d55500bfadbdc1123e077de9fff3813d7e4a37943f8f95edd466d55ff19a07"} Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.698768 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-967968ff4-947nc" Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.712946 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.891749567 podStartE2EDuration="26.712920611s" podCreationTimestamp="2026-02-19 19:39:20 +0000 UTC" firstStartedPulling="2026-02-19 19:39:43.179050368 +0000 UTC m=+1250.969716310" lastFinishedPulling="2026-02-19 19:39:46.000221392 +0000 UTC m=+1253.790887354" observedRunningTime="2026-02-19 19:39:46.706472675 +0000 UTC m=+1254.497138617" watchObservedRunningTime="2026-02-19 19:39:46.712920611 +0000 UTC m=+1254.503586553" Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.805386 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bd66c8fd6-b6vcd"] Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.832760 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.460481504 podStartE2EDuration="30.832741352s" podCreationTimestamp="2026-02-19 19:39:16 +0000 UTC" firstStartedPulling="2026-02-19 19:39:40.689653081 +0000 UTC m=+1248.480319023" lastFinishedPulling="2026-02-19 19:39:46.061912929 +0000 UTC m=+1253.852578871" observedRunningTime="2026-02-19 
19:39:46.830291132 +0000 UTC m=+1254.620957074" watchObservedRunningTime="2026-02-19 19:39:46.832741352 +0000 UTC m=+1254.623407294" Feb 19 19:39:46 crc kubenswrapper[4787]: I0219 19:39:46.857634 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-j5ggf" podStartSLOduration=21.975769837 podStartE2EDuration="24.857600628s" podCreationTimestamp="2026-02-19 19:39:22 +0000 UTC" firstStartedPulling="2026-02-19 19:39:43.179049388 +0000 UTC m=+1250.969715330" lastFinishedPulling="2026-02-19 19:39:46.060880179 +0000 UTC m=+1253.851546121" observedRunningTime="2026-02-19 19:39:46.851854563 +0000 UTC m=+1254.642520515" watchObservedRunningTime="2026-02-19 19:39:46.857600628 +0000 UTC m=+1254.648266570" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.253256 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s7pld"] Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.311291 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5klnp"] Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.313340 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.316141 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.360999 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5klnp"] Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.370297 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.370525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-config\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.370645 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsj2b\" (UniqueName: \"kubernetes.io/projected/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-kube-api-access-vsj2b\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.370869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.476343 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-config\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.476406 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsj2b\" (UniqueName: \"kubernetes.io/projected/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-kube-api-access-vsj2b\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.476569 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.476652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.477241 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-config\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 
19:39:47.477397 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.477939 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.531850 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsj2b\" (UniqueName: \"kubernetes.io/projected/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-kube-api-access-vsj2b\") pod \"dnsmasq-dns-7fd796d7df-5klnp\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") " pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.590350 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-x2n4w"] Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.616064 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vn96g"] Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.637289 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.645670 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.649197 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vn96g"] Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.650390 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.685906 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-config\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.686521 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.686686 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.686798 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" 
Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.686933 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rfb\" (UniqueName: \"kubernetes.io/projected/e629dc49-6c0f-43b2-844c-88658e0dd5ac-kube-api-access-j6rfb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.726903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerStarted","Data":"d35f0c51c1a567437a372c4879730c992c298e937698a29a921fbc5e015a9771"} Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.791048 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.791851 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.791912 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.792002 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j6rfb\" (UniqueName: \"kubernetes.io/projected/e629dc49-6c0f-43b2-844c-88658e0dd5ac-kube-api-access-j6rfb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.801073 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-config\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.803843 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.804357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-config\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.805382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.805479 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.839177 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6rfb\" (UniqueName: \"kubernetes.io/projected/e629dc49-6c0f-43b2-844c-88658e0dd5ac-kube-api-access-j6rfb\") pod \"dnsmasq-dns-86db49b7ff-vn96g\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.840887 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:47 crc kubenswrapper[4787]: I0219 19:39:47.994064 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.003895 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9ks\" (UniqueName: \"kubernetes.io/projected/f7fd439a-6768-4d1c-8317-4d8cd31301bb-kube-api-access-zc9ks\") pod \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.004036 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-dns-svc\") pod \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\" (UID: \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.004077 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-config\") pod \"f7fd439a-6768-4d1c-8317-4d8cd31301bb\" (UID: 
\"f7fd439a-6768-4d1c-8317-4d8cd31301bb\") " Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.006120 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-config" (OuterVolumeSpecName: "config") pod "f7fd439a-6768-4d1c-8317-4d8cd31301bb" (UID: "f7fd439a-6768-4d1c-8317-4d8cd31301bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.007786 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7fd439a-6768-4d1c-8317-4d8cd31301bb" (UID: "f7fd439a-6768-4d1c-8317-4d8cd31301bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.014179 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fd439a-6768-4d1c-8317-4d8cd31301bb-kube-api-access-zc9ks" (OuterVolumeSpecName: "kube-api-access-zc9ks") pod "f7fd439a-6768-4d1c-8317-4d8cd31301bb" (UID: "f7fd439a-6768-4d1c-8317-4d8cd31301bb"). InnerVolumeSpecName "kube-api-access-zc9ks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.120331 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.120374 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7fd439a-6768-4d1c-8317-4d8cd31301bb-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.120391 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9ks\" (UniqueName: \"kubernetes.io/projected/f7fd439a-6768-4d1c-8317-4d8cd31301bb-kube-api-access-zc9ks\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.135819 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.263148 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5klnp"] Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.330499 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-config\") pod \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.330917 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-config" (OuterVolumeSpecName: "config") pod "39fb25a6-b1cf-4d60-b1eb-9ec62b143166" (UID: "39fb25a6-b1cf-4d60-b1eb-9ec62b143166"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.331069 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-dns-svc\") pod \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.331142 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5sj\" (UniqueName: \"kubernetes.io/projected/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-kube-api-access-bt5sj\") pod \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\" (UID: \"39fb25a6-b1cf-4d60-b1eb-9ec62b143166\") " Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.331304 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39fb25a6-b1cf-4d60-b1eb-9ec62b143166" (UID: "39fb25a6-b1cf-4d60-b1eb-9ec62b143166"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.340512 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.340638 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.345882 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-kube-api-access-bt5sj" (OuterVolumeSpecName: "kube-api-access-bt5sj") pod "39fb25a6-b1cf-4d60-b1eb-9ec62b143166" (UID: "39fb25a6-b1cf-4d60-b1eb-9ec62b143166"). InnerVolumeSpecName "kube-api-access-bt5sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.442382 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5sj\" (UniqueName: \"kubernetes.io/projected/39fb25a6-b1cf-4d60-b1eb-9ec62b143166-kube-api-access-bt5sj\") on node \"crc\" DevicePath \"\"" Feb 19 19:39:48 crc kubenswrapper[4787]: W0219 19:39:48.472480 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode629dc49_6c0f_43b2_844c_88658e0dd5ac.slice/crio-22460d1c3bc38956dc2c43260602431b6ba6b8380e860929b52ea3a1be8ac02d WatchSource:0}: Error finding container 22460d1c3bc38956dc2c43260602431b6ba6b8380e860929b52ea3a1be8ac02d: Status 404 returned error can't find the container with id 22460d1c3bc38956dc2c43260602431b6ba6b8380e860929b52ea3a1be8ac02d Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.479925 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-86db49b7ff-vn96g"] Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.583307 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.583527 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.585918 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.628050 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.733185 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.733220 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-s7pld" event={"ID":"f7fd439a-6768-4d1c-8317-4d8cd31301bb","Type":"ContainerDied","Data":"1d9d1bb6c88086e3649b60baa3c70a10e0dd83a36add9a3b65e852e3774b8b6e"} Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.734270 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" event={"ID":"39fb25a6-b1cf-4d60-b1eb-9ec62b143166","Type":"ContainerDied","Data":"a829855d755388dfd58eeb9b991905affa7506b9434f606a76fc1804013f9938"} Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.734369 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-x2n4w" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.735302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" event={"ID":"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e","Type":"ContainerStarted","Data":"2607d664fd869e860398448fa8b361f34fc1ec46158a4f9ef590cf4db0a17410"} Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.743536 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" event={"ID":"e629dc49-6c0f-43b2-844c-88658e0dd5ac","Type":"ContainerStarted","Data":"22460d1c3bc38956dc2c43260602431b6ba6b8380e860929b52ea3a1be8ac02d"} Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.809328 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.817533 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s7pld"] Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.837794 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-s7pld"] Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.871069 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-x2n4w"] Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.887832 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-x2n4w"] Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.910912 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39fb25a6-b1cf-4d60-b1eb-9ec62b143166" path="/var/lib/kubelet/pods/39fb25a6-b1cf-4d60-b1eb-9ec62b143166/volumes" Feb 19 19:39:48 crc kubenswrapper[4787]: I0219 19:39:48.911330 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fd439a-6768-4d1c-8317-4d8cd31301bb" 
path="/var/lib/kubelet/pods/f7fd439a-6768-4d1c-8317-4d8cd31301bb/volumes" Feb 19 19:39:49 crc kubenswrapper[4787]: I0219 19:39:49.783785 4787 generic.go:334] "Generic (PLEG): container finished" podID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerID="8cd5e95656c610ebff93dc374c9c9507e5c640fc51cef85541ba62f7f1617fba" exitCode=0 Feb 19 19:39:49 crc kubenswrapper[4787]: I0219 19:39:49.784334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" event={"ID":"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e","Type":"ContainerDied","Data":"8cd5e95656c610ebff93dc374c9c9507e5c640fc51cef85541ba62f7f1617fba"} Feb 19 19:39:49 crc kubenswrapper[4787]: I0219 19:39:49.789755 4787 generic.go:334] "Generic (PLEG): container finished" podID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerID="0a670c55f01c079f50c57c070c844bffc4d20c696128d5a63cf4c5018a0e8622" exitCode=0 Feb 19 19:39:49 crc kubenswrapper[4787]: I0219 19:39:49.789857 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" event={"ID":"e629dc49-6c0f-43b2-844c-88658e0dd5ac","Type":"ContainerDied","Data":"0a670c55f01c079f50c57c070c844bffc4d20c696128d5a63cf4c5018a0e8622"} Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.804065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" event={"ID":"e629dc49-6c0f-43b2-844c-88658e0dd5ac","Type":"ContainerStarted","Data":"3e2ca2fb541d0ab37f4bb376032c7651987295414eacedc7f13b79c782afb86c"} Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.804682 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.807446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" 
event={"ID":"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e","Type":"ContainerStarted","Data":"195e388eaacc3333943269be7737ef83af138e436badc334e6b360f1022de16b"} Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.807696 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.809529 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5848c368-e71c-439d-bfca-f241813f9136","Type":"ContainerStarted","Data":"dfd66e27e9d5a15f9faca2e61d8e0942241f0c429b70da22db79d5c79bfefd4f"} Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.833833 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" podStartSLOduration=3.396487016 podStartE2EDuration="3.833812843s" podCreationTimestamp="2026-02-19 19:39:47 +0000 UTC" firstStartedPulling="2026-02-19 19:39:48.474521343 +0000 UTC m=+1256.265187285" lastFinishedPulling="2026-02-19 19:39:48.91184717 +0000 UTC m=+1256.702513112" observedRunningTime="2026-02-19 19:39:50.823165286 +0000 UTC m=+1258.613831228" watchObservedRunningTime="2026-02-19 19:39:50.833812843 +0000 UTC m=+1258.624478785" Feb 19 19:39:50 crc kubenswrapper[4787]: I0219 19:39:50.844534 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" podStartSLOduration=3.365256816 podStartE2EDuration="3.844519851s" podCreationTimestamp="2026-02-19 19:39:47 +0000 UTC" firstStartedPulling="2026-02-19 19:39:48.277817417 +0000 UTC m=+1256.068483359" lastFinishedPulling="2026-02-19 19:39:48.757080452 +0000 UTC m=+1256.547746394" observedRunningTime="2026-02-19 19:39:50.840497705 +0000 UTC m=+1258.631163647" watchObservedRunningTime="2026-02-19 19:39:50.844519851 +0000 UTC m=+1258.635185793" Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.586076 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.626194 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.819686 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d3590400-4951-4e45-b479-bc2d31b92a57","Type":"ContainerStarted","Data":"f10ca1a58d8a1cf421e4bef9fc341eb35ab9a4549c9b127e0f89823f73d7dd0d"} Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.820571 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.821309 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fec6d8b2-4d43-4053-8028-747e6d28f7c4","Type":"ContainerStarted","Data":"91f90c98374a5938523ad6282ee3675bacf01f31271e1e554d4fb802e9fbfde4"} Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.823435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" event={"ID":"191731f4-3080-4ae3-9aab-f44a30a33246","Type":"ContainerStarted","Data":"ac4bf4c837bc31b31dbfcd54723936d33a1fb0634f107e4c9882e76ffc97ca87"} Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.840629 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.872987529 podStartE2EDuration="40.840593452s" podCreationTimestamp="2026-02-19 19:39:11 +0000 UTC" firstStartedPulling="2026-02-19 19:39:13.822943673 +0000 UTC m=+1221.613609615" lastFinishedPulling="2026-02-19 19:39:50.790549596 +0000 UTC m=+1258.581215538" observedRunningTime="2026-02-19 19:39:51.838010058 +0000 UTC m=+1259.628676000" watchObservedRunningTime="2026-02-19 19:39:51.840593452 +0000 UTC m=+1259.631259394" Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 
19:39:51.866456 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 19:39:51 crc kubenswrapper[4787]: I0219 19:39:51.882858 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-mjrdv" podStartSLOduration=3.481300305 podStartE2EDuration="36.882837649s" podCreationTimestamp="2026-02-19 19:39:15 +0000 UTC" firstStartedPulling="2026-02-19 19:39:17.238164447 +0000 UTC m=+1225.028830389" lastFinishedPulling="2026-02-19 19:39:50.639701791 +0000 UTC m=+1258.430367733" observedRunningTime="2026-02-19 19:39:51.876997051 +0000 UTC m=+1259.667663003" watchObservedRunningTime="2026-02-19 19:39:51.882837649 +0000 UTC m=+1259.673503591" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.038821 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.041411 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.046460 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.046766 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.046997 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.047717 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wsjx9" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.060232 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.130627 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.130706 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/765d10d6-7924-425b-b553-6e921dc89049-config\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.130760 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " 
pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.130798 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.131226 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/765d10d6-7924-425b-b553-6e921dc89049-scripts\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.131397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/765d10d6-7924-425b-b553-6e921dc89049-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.131505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbhm\" (UniqueName: \"kubernetes.io/projected/765d10d6-7924-425b-b553-6e921dc89049-kube-api-access-zhbhm\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233492 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/765d10d6-7924-425b-b553-6e921dc89049-config\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233573 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/765d10d6-7924-425b-b553-6e921dc89049-scripts\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233746 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/765d10d6-7924-425b-b553-6e921dc89049-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.233777 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbhm\" (UniqueName: \"kubernetes.io/projected/765d10d6-7924-425b-b553-6e921dc89049-kube-api-access-zhbhm\") pod \"ovn-northd-0\" (UID: 
\"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.234473 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/765d10d6-7924-425b-b553-6e921dc89049-config\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.234737 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/765d10d6-7924-425b-b553-6e921dc89049-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.235075 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/765d10d6-7924-425b-b553-6e921dc89049-scripts\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.239193 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.239771 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.240087 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/765d10d6-7924-425b-b553-6e921dc89049-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.249942 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbhm\" (UniqueName: \"kubernetes.io/projected/765d10d6-7924-425b-b553-6e921dc89049-kube-api-access-zhbhm\") pod \"ovn-northd-0\" (UID: \"765d10d6-7924-425b-b553-6e921dc89049\") " pod="openstack/ovn-northd-0" Feb 19 19:39:52 crc kubenswrapper[4787]: I0219 19:39:52.375469 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 19:39:53 crc kubenswrapper[4787]: I0219 19:39:52.926988 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 19:39:53 crc kubenswrapper[4787]: I0219 19:39:53.863597 4787 generic.go:334] "Generic (PLEG): container finished" podID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerID="d35f0c51c1a567437a372c4879730c992c298e937698a29a921fbc5e015a9771" exitCode=0 Feb 19 19:39:53 crc kubenswrapper[4787]: I0219 19:39:53.863854 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerDied","Data":"d35f0c51c1a567437a372c4879730c992c298e937698a29a921fbc5e015a9771"} Feb 19 19:39:53 crc kubenswrapper[4787]: I0219 19:39:53.865978 4787 generic.go:334] "Generic (PLEG): container finished" podID="f94e933a-d230-409b-99ff-f47cf13a9638" containerID="4be09fc8e09909fe60443dc2c7613a2a8f5ef555b44f75c263eec944ddaf938a" exitCode=0 Feb 19 19:39:53 crc kubenswrapper[4787]: I0219 19:39:53.866296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x42pw" event={"ID":"f94e933a-d230-409b-99ff-f47cf13a9638","Type":"ContainerDied","Data":"4be09fc8e09909fe60443dc2c7613a2a8f5ef555b44f75c263eec944ddaf938a"} Feb 
19 19:39:53 crc kubenswrapper[4787]: I0219 19:39:53.870961 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"765d10d6-7924-425b-b553-6e921dc89049","Type":"ContainerStarted","Data":"bbf9ab4c11d1f68858f200927c9c17d202466586ddafe8ac7afd9ad5bff3a630"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.881122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x42pw" event={"ID":"f94e933a-d230-409b-99ff-f47cf13a9638","Type":"ContainerStarted","Data":"f0d816f36b2e8d9b909a73f7e39c016d4e821fdb0905864c807deb6d06b780ea"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.881438 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x42pw" event={"ID":"f94e933a-d230-409b-99ff-f47cf13a9638","Type":"ContainerStarted","Data":"a795f4ea42e8afe279a21376854daf9ca59da2abe78d3ad2e70a1739ad649cec"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.881614 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.881643 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x42pw"
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.883578 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"765d10d6-7924-425b-b553-6e921dc89049","Type":"ContainerStarted","Data":"021ade59cb92bacb7688c9b17030cba930672202730798e6a98f028f77376d0b"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.883615 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"765d10d6-7924-425b-b553-6e921dc89049","Type":"ContainerStarted","Data":"14b461d0fed0ae7ee40cfe0258901e34509fcbfc0220df3c3a56a1a2b4c4fbb6"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.883716 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.886242 4787 generic.go:334] "Generic (PLEG): container finished" podID="5848c368-e71c-439d-bfca-f241813f9136" containerID="dfd66e27e9d5a15f9faca2e61d8e0942241f0c429b70da22db79d5c79bfefd4f" exitCode=0
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.886301 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5848c368-e71c-439d-bfca-f241813f9136","Type":"ContainerDied","Data":"dfd66e27e9d5a15f9faca2e61d8e0942241f0c429b70da22db79d5c79bfefd4f"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.887682 4787 generic.go:334] "Generic (PLEG): container finished" podID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerID="91f90c98374a5938523ad6282ee3675bacf01f31271e1e554d4fb802e9fbfde4" exitCode=0
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.887706 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fec6d8b2-4d43-4053-8028-747e6d28f7c4","Type":"ContainerDied","Data":"91f90c98374a5938523ad6282ee3675bacf01f31271e1e554d4fb802e9fbfde4"}
Feb 19 19:39:54 crc kubenswrapper[4787]: I0219 19:39:54.918145 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-x42pw" podStartSLOduration=9.205013993 podStartE2EDuration="37.918125429s" podCreationTimestamp="2026-02-19 19:39:17 +0000 UTC" firstStartedPulling="2026-02-19 19:39:23.740036811 +0000 UTC m=+1231.530702763" lastFinishedPulling="2026-02-19 19:39:52.453148257 +0000 UTC m=+1260.243814199" observedRunningTime="2026-02-19 19:39:54.909672546 +0000 UTC m=+1262.700338488" watchObservedRunningTime="2026-02-19 19:39:54.918125429 +0000 UTC m=+1262.708791371"
Feb 19 19:39:55 crc kubenswrapper[4787]: I0219 19:39:55.905574 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5848c368-e71c-439d-bfca-f241813f9136","Type":"ContainerStarted","Data":"ecc65d51d2df1ace550269a8befb7e963846b0b2777e2f062fb718ede5377d48"}
Feb 19 19:39:55 crc kubenswrapper[4787]: I0219 19:39:55.911805 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fec6d8b2-4d43-4053-8028-747e6d28f7c4","Type":"ContainerStarted","Data":"5baf6aff915f430780af9e288041e5e98fa43631f9a95254e345d55b6ad290d2"}
Feb 19 19:39:55 crc kubenswrapper[4787]: I0219 19:39:55.926679 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.081262738 podStartE2EDuration="3.92666419s" podCreationTimestamp="2026-02-19 19:39:52 +0000 UTC" firstStartedPulling="2026-02-19 19:39:52.929086866 +0000 UTC m=+1260.719752808" lastFinishedPulling="2026-02-19 19:39:53.774488318 +0000 UTC m=+1261.565154260" observedRunningTime="2026-02-19 19:39:54.986694464 +0000 UTC m=+1262.777360406" watchObservedRunningTime="2026-02-19 19:39:55.92666419 +0000 UTC m=+1263.717330132"
Feb 19 19:39:55 crc kubenswrapper[4787]: I0219 19:39:55.931937 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.883579665 podStartE2EDuration="46.931922491s" podCreationTimestamp="2026-02-19 19:39:09 +0000 UTC" firstStartedPulling="2026-02-19 19:39:11.277016941 +0000 UTC m=+1219.067682883" lastFinishedPulling="2026-02-19 19:39:50.325359767 +0000 UTC m=+1258.116025709" observedRunningTime="2026-02-19 19:39:55.924046755 +0000 UTC m=+1263.714712697" watchObservedRunningTime="2026-02-19 19:39:55.931922491 +0000 UTC m=+1263.722588433"
Feb 19 19:39:55 crc kubenswrapper[4787]: I0219 19:39:55.943479 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371990.911314 podStartE2EDuration="45.943460904s" podCreationTimestamp="2026-02-19 19:39:10 +0000 UTC" firstStartedPulling="2026-02-19 19:39:13.076373018 +0000 UTC m=+1220.867038960" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:39:55.94296372 +0000 UTC m=+1263.733629662" watchObservedRunningTime="2026-02-19 19:39:55.943460904 +0000 UTC m=+1263.734126846"
Feb 19 19:39:56 crc kubenswrapper[4787]: E0219 19:39:56.035090 4787 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:43690->38.102.83.150:35215: write tcp 38.102.83.150:43690->38.102.83.150:35215: write: broken pipe
Feb 19 19:39:57 crc kubenswrapper[4787]: I0219 19:39:57.014852 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 19 19:39:57 crc kubenswrapper[4787]: I0219 19:39:57.639784 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp"
Feb 19 19:39:57 crc kubenswrapper[4787]: I0219 19:39:57.995535 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g"
Feb 19 19:39:58 crc kubenswrapper[4787]: I0219 19:39:58.052188 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5klnp"]
Feb 19 19:39:58 crc kubenswrapper[4787]: I0219 19:39:58.053378 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerName="dnsmasq-dns" containerID="cri-o://195e388eaacc3333943269be7737ef83af138e436badc334e6b360f1022de16b" gracePeriod=10
Feb 19 19:39:58 crc kubenswrapper[4787]: I0219 19:39:58.950303 4787 generic.go:334] "Generic (PLEG): container finished" podID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerID="195e388eaacc3333943269be7737ef83af138e436badc334e6b360f1022de16b" exitCode=0
Feb 19 19:39:58 crc kubenswrapper[4787]: I0219 19:39:58.950361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" event={"ID":"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e","Type":"ContainerDied","Data":"195e388eaacc3333943269be7737ef83af138e436badc334e6b360f1022de16b"}
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.407110 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.407919 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.703455 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.712699 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-dns-svc\") pod \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") "
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.713146 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-config\") pod \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") "
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.713202 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-ovsdbserver-nb\") pod \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") "
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.713258 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsj2b\" (UniqueName: \"kubernetes.io/projected/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-kube-api-access-vsj2b\") pod \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\" (UID: \"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e\") "
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.728793 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-kube-api-access-vsj2b" (OuterVolumeSpecName: "kube-api-access-vsj2b") pod "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" (UID: "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e"). InnerVolumeSpecName "kube-api-access-vsj2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.777005 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-config" (OuterVolumeSpecName: "config") pod "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" (UID: "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.798771 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" (UID: "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.816558 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-config\") on node \"crc\" DevicePath \"\""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.816590 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.816629 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsj2b\" (UniqueName: \"kubernetes.io/projected/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-kube-api-access-vsj2b\") on node \"crc\" DevicePath \"\""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.819631 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" (UID: "e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.918592 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.971116 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a15222b2-deb2-46d1-a58d-d58d78228940","Type":"ContainerStarted","Data":"85354daa3da10b50cc1e08a7f157ecf02d9d1ea422d613d618253cf52e8f86b0"}
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.971392 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.976030 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp" event={"ID":"e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e","Type":"ContainerDied","Data":"2607d664fd869e860398448fa8b361f34fc1ec46158a4f9ef590cf4db0a17410"}
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.976092 4787 scope.go:117] "RemoveContainer" containerID="195e388eaacc3333943269be7737ef83af138e436badc334e6b360f1022de16b"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.976236 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5klnp"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.985855 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerStarted","Data":"2db65f035436118b6796dbab1e9c7b0c372675ae01fa9c36fcd6cf9b0cfb0e8a"}
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.996848 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.814198221 podStartE2EDuration="46.996821785s" podCreationTimestamp="2026-02-19 19:39:14 +0000 UTC" firstStartedPulling="2026-02-19 19:39:15.520317285 +0000 UTC m=+1223.310983227" lastFinishedPulling="2026-02-19 19:40:00.702940849 +0000 UTC m=+1268.493606791" observedRunningTime="2026-02-19 19:40:00.988534516 +0000 UTC m=+1268.779200458" watchObservedRunningTime="2026-02-19 19:40:00.996821785 +0000 UTC m=+1268.787487747"
Feb 19 19:40:00 crc kubenswrapper[4787]: I0219 19:40:00.999858 4787 scope.go:117] "RemoveContainer" containerID="8cd5e95656c610ebff93dc374c9c9507e5c640fc51cef85541ba62f7f1617fba"
Feb 19 19:40:01 crc kubenswrapper[4787]: I0219 19:40:01.013294 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5klnp"]
Feb 19 19:40:01 crc kubenswrapper[4787]: I0219 19:40:01.023804 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5klnp"]
Feb 19 19:40:01 crc kubenswrapper[4787]: I0219 19:40:01.834893 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 19 19:40:01 crc kubenswrapper[4787]: I0219 19:40:01.835162 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 19 19:40:01 crc kubenswrapper[4787]: I0219 19:40:01.919386 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 19 19:40:02 crc kubenswrapper[4787]: I0219 19:40:02.066233 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 19 19:40:02 crc kubenswrapper[4787]: I0219 19:40:02.733924 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 19 19:40:02 crc kubenswrapper[4787]: I0219 19:40:02.801552 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 19 19:40:02 crc kubenswrapper[4787]: I0219 19:40:02.911313 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" path="/var/lib/kubelet/pods/e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e/volumes"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.264842 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.392468 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5z6xh"]
Feb 19 19:40:03 crc kubenswrapper[4787]: E0219 19:40:03.392912 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerName="dnsmasq-dns"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.392931 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerName="dnsmasq-dns"
Feb 19 19:40:03 crc kubenswrapper[4787]: E0219 19:40:03.392958 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerName="init"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.392965 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerName="init"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.393163 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6020ed3-c1f5-4a01-bbda-e4efc0a1c76e" containerName="dnsmasq-dns"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.393899 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.400696 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0257-account-create-update-m9bdf"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.402315 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.407738 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.418469 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5z6xh"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.432732 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0257-account-create-update-m9bdf"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.483339 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrnd\" (UniqueName: \"kubernetes.io/projected/a80f1688-cd91-4c7e-a26a-20763f129b82-kube-api-access-ghrnd\") pod \"keystone-0257-account-create-update-m9bdf\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.483385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80f1688-cd91-4c7e-a26a-20763f129b82-operator-scripts\") pod \"keystone-0257-account-create-update-m9bdf\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.483443 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d74f59-7434-4c00-8097-59f873601963-operator-scripts\") pod \"keystone-db-create-5z6xh\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.483483 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsj2k\" (UniqueName: \"kubernetes.io/projected/c3d74f59-7434-4c00-8097-59f873601963-kube-api-access-vsj2k\") pod \"keystone-db-create-5z6xh\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.538017 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v6kj2"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.539692 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.566862 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v6kj2"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.585347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrnd\" (UniqueName: \"kubernetes.io/projected/a80f1688-cd91-4c7e-a26a-20763f129b82-kube-api-access-ghrnd\") pod \"keystone-0257-account-create-update-m9bdf\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.585393 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80f1688-cd91-4c7e-a26a-20763f129b82-operator-scripts\") pod \"keystone-0257-account-create-update-m9bdf\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.585437 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d74f59-7434-4c00-8097-59f873601963-operator-scripts\") pod \"keystone-db-create-5z6xh\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.585486 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsj2k\" (UniqueName: \"kubernetes.io/projected/c3d74f59-7434-4c00-8097-59f873601963-kube-api-access-vsj2k\") pod \"keystone-db-create-5z6xh\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.587146 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80f1688-cd91-4c7e-a26a-20763f129b82-operator-scripts\") pod \"keystone-0257-account-create-update-m9bdf\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.587174 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d74f59-7434-4c00-8097-59f873601963-operator-scripts\") pod \"keystone-db-create-5z6xh\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.605913 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrnd\" (UniqueName: \"kubernetes.io/projected/a80f1688-cd91-4c7e-a26a-20763f129b82-kube-api-access-ghrnd\") pod \"keystone-0257-account-create-update-m9bdf\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.614887 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsj2k\" (UniqueName: \"kubernetes.io/projected/c3d74f59-7434-4c00-8097-59f873601963-kube-api-access-vsj2k\") pod \"keystone-db-create-5z6xh\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.674738 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ec9b-account-create-update-x8rkj"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.676082 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.679845 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.683326 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec9b-account-create-update-x8rkj"]
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.689401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdgb4\" (UniqueName: \"kubernetes.io/projected/9002f19b-58a4-49df-9f61-945f8bca211e-kube-api-access-bdgb4\") pod \"placement-db-create-v6kj2\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.689532 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9002f19b-58a4-49df-9f61-945f8bca211e-operator-scripts\") pod \"placement-db-create-v6kj2\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.733219 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5z6xh"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.741647 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0257-account-create-update-m9bdf"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.792073 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9002f19b-58a4-49df-9f61-945f8bca211e-operator-scripts\") pod \"placement-db-create-v6kj2\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.792278 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdgb4\" (UniqueName: \"kubernetes.io/projected/9002f19b-58a4-49df-9f61-945f8bca211e-kube-api-access-bdgb4\") pod \"placement-db-create-v6kj2\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.792329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2x6x\" (UniqueName: \"kubernetes.io/projected/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-kube-api-access-v2x6x\") pod \"placement-ec9b-account-create-update-x8rkj\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.792377 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-operator-scripts\") pod \"placement-ec9b-account-create-update-x8rkj\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.793011 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9002f19b-58a4-49df-9f61-945f8bca211e-operator-scripts\") pod \"placement-db-create-v6kj2\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.817796 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdgb4\" (UniqueName: \"kubernetes.io/projected/9002f19b-58a4-49df-9f61-945f8bca211e-kube-api-access-bdgb4\") pod \"placement-db-create-v6kj2\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.861170 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v6kj2"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.894214 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-operator-scripts\") pod \"placement-ec9b-account-create-update-x8rkj\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.894447 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2x6x\" (UniqueName: \"kubernetes.io/projected/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-kube-api-access-v2x6x\") pod \"placement-ec9b-account-create-update-x8rkj\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.895228 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-operator-scripts\") pod \"placement-ec9b-account-create-update-x8rkj\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:03 crc kubenswrapper[4787]: I0219 19:40:03.916423 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2x6x\" (UniqueName: \"kubernetes.io/projected/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-kube-api-access-v2x6x\") pod \"placement-ec9b-account-create-update-x8rkj\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.018991 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec9b-account-create-update-x8rkj"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.032771 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerStarted","Data":"1e53372b9cadb5188d3d20605334d615602e746930def99032ef4799572022e6"}
Feb 19 19:40:04 crc kubenswrapper[4787]: W0219 19:40:04.236775 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3d74f59_7434_4c00_8097_59f873601963.slice/crio-aa84dc917629adbca33668f81e1c3a6466cc9ea333070d7954168e152a8915a1 WatchSource:0}: Error finding container aa84dc917629adbca33668f81e1c3a6466cc9ea333070d7954168e152a8915a1: Status 404 returned error can't find the container with id aa84dc917629adbca33668f81e1c3a6466cc9ea333070d7954168e152a8915a1
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.239375 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5z6xh"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.357062 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0257-account-create-update-m9bdf"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.471682 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7jdz5"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.473870 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.483749 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7jdz5"]
Feb 19 19:40:04 crc kubenswrapper[4787]: W0219 19:40:04.542373 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9002f19b_58a4_49df_9f61_945f8bca211e.slice/crio-867be34c8759a53b1ee2f50558903b0f61b271f77d113f69988946ad34110c8e WatchSource:0}: Error finding container 867be34c8759a53b1ee2f50558903b0f61b271f77d113f69988946ad34110c8e: Status 404 returned error can't find the container with id 867be34c8759a53b1ee2f50558903b0f61b271f77d113f69988946ad34110c8e
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.627213 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v6kj2"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.664150 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98dd860c-7fed-409a-92ca-374370b9e80f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7jdz5\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.664255 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmp7f\" (UniqueName: \"kubernetes.io/projected/98dd860c-7fed-409a-92ca-374370b9e80f-kube-api-access-lmp7f\") pod \"mysqld-exporter-openstack-db-create-7jdz5\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.716313 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec9b-account-create-update-x8rkj"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.736538 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-mnh6r"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.739547 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.768250 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6fh9\" (UniqueName: \"kubernetes.io/projected/0fa638ca-98b0-492d-aca0-05e57d565eb0-kube-api-access-d6fh9\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.768571 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-config\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.768834 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.769058 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98dd860c-7fed-409a-92ca-374370b9e80f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7jdz5\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.769157 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmp7f\" (UniqueName: \"kubernetes.io/projected/98dd860c-7fed-409a-92ca-374370b9e80f-kube-api-access-lmp7f\") pod \"mysqld-exporter-openstack-db-create-7jdz5\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.769193 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-dns-svc\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.769283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.770025 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98dd860c-7fed-409a-92ca-374370b9e80f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7jdz5\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.840207 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmp7f\" (UniqueName: \"kubernetes.io/projected/98dd860c-7fed-409a-92ca-374370b9e80f-kube-api-access-lmp7f\") pod \"mysqld-exporter-openstack-db-create-7jdz5\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " pod="openstack/mysqld-exporter-openstack-db-create-7jdz5"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.841870 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mnh6r"]
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.871652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-dns-svc\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.871732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.871822 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6fh9\" (UniqueName: \"kubernetes.io/projected/0fa638ca-98b0-492d-aca0-05e57d565eb0-kube-api-access-d6fh9\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r"
Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.871848 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-config\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " 
pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.871916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.876179 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.884618 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-config\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.885265 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-dns-svc\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.885450 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.955330 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6fh9\" (UniqueName: \"kubernetes.io/projected/0fa638ca-98b0-492d-aca0-05e57d565eb0-kube-api-access-d6fh9\") pod \"dnsmasq-dns-698758b865-mnh6r\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.955523 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0b91-account-create-update-npwhw"] Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.969287 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.971108 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0b91-account-create-update-npwhw"] Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.979488 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c20b79d-45cd-476a-9309-d7850a869dd8-operator-scripts\") pod \"mysqld-exporter-0b91-account-create-update-npwhw\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.979845 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktv6w\" (UniqueName: \"kubernetes.io/projected/7c20b79d-45cd-476a-9309-d7850a869dd8-kube-api-access-ktv6w\") pod \"mysqld-exporter-0b91-account-create-update-npwhw\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:04 crc kubenswrapper[4787]: I0219 19:40:04.980935 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 19 
19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.073849 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v6kj2" event={"ID":"9002f19b-58a4-49df-9f61-945f8bca211e","Type":"ContainerStarted","Data":"867be34c8759a53b1ee2f50558903b0f61b271f77d113f69988946ad34110c8e"} Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.085000 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c20b79d-45cd-476a-9309-d7850a869dd8-operator-scripts\") pod \"mysqld-exporter-0b91-account-create-update-npwhw\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.085171 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktv6w\" (UniqueName: \"kubernetes.io/projected/7c20b79d-45cd-476a-9309-d7850a869dd8-kube-api-access-ktv6w\") pod \"mysqld-exporter-0b91-account-create-update-npwhw\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.086253 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c20b79d-45cd-476a-9309-d7850a869dd8-operator-scripts\") pod \"mysqld-exporter-0b91-account-create-update-npwhw\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.122412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktv6w\" (UniqueName: \"kubernetes.io/projected/7c20b79d-45cd-476a-9309-d7850a869dd8-kube-api-access-ktv6w\") pod \"mysqld-exporter-0b91-account-create-update-npwhw\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " 
pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.126082 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.145867 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5z6xh" event={"ID":"c3d74f59-7434-4c00-8097-59f873601963","Type":"ContainerStarted","Data":"42426968039e8481053b059b66d234c35c11587ffb047931659dcb74db84ee8b"} Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.145913 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5z6xh" event={"ID":"c3d74f59-7434-4c00-8097-59f873601963","Type":"ContainerStarted","Data":"aa84dc917629adbca33668f81e1c3a6466cc9ea333070d7954168e152a8915a1"} Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.164237 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0257-account-create-update-m9bdf" event={"ID":"a80f1688-cd91-4c7e-a26a-20763f129b82","Type":"ContainerStarted","Data":"e19a94b75d579801708f035b0285721bc9accd3fcf2df45b04ef7f866a52cfd0"} Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.167541 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec9b-account-create-update-x8rkj" event={"ID":"6fe5aa4b-4d38-4918-adc4-b507bb6f1317","Type":"ContainerStarted","Data":"2ef5515dc30c1e5dfba0cead35998b1ac8afaafe38795908aecb331564c50b6a"} Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.236692 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.406775 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.760702 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7jdz5"] Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.876032 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mnh6r"] Feb 19 19:40:05 crc kubenswrapper[4787]: W0219 19:40:05.890924 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa638ca_98b0_492d_aca0_05e57d565eb0.slice/crio-2e1ee4b3b14de3202075d02625aaa9cecdebcb321fa7c75ce759f7836bc1705b WatchSource:0}: Error finding container 2e1ee4b3b14de3202075d02625aaa9cecdebcb321fa7c75ce759f7836bc1705b: Status 404 returned error can't find the container with id 2e1ee4b3b14de3202075d02625aaa9cecdebcb321fa7c75ce759f7836bc1705b Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.951517 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.962822 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.968579 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.968797 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-cj4qj" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.968932 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.969035 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 19:40:05 crc kubenswrapper[4787]: I0219 19:40:05.991083 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.009158 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-cache\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.009284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.009363 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\") pod \"swift-storage-0\" (UID: 
\"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.009390 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.009433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-lock\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.009472 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765ts\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-kube-api-access-765ts\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: W0219 19:40:06.031596 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c20b79d_45cd_476a_9309_d7850a869dd8.slice/crio-01f74f122795451a0349d1dbe9cba049903e5c215d73a211425baedc2591dad6 WatchSource:0}: Error finding container 01f74f122795451a0349d1dbe9cba049903e5c215d73a211425baedc2591dad6: Status 404 returned error can't find the container with id 01f74f122795451a0349d1dbe9cba049903e5c215d73a211425baedc2591dad6 Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.035111 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0b91-account-create-update-npwhw"] Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.111924 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-cache\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.112396 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-cache\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.114120 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.114195 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.114229 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.114279 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-lock\") pod 
\"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.114325 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-765ts\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-kube-api-access-765ts\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: E0219 19:40:06.114549 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 19:40:06 crc kubenswrapper[4787]: E0219 19:40:06.114578 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 19:40:06 crc kubenswrapper[4787]: E0219 19:40:06.114650 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift podName:3109e7cb-bd74-40d5-a2ab-deb7a9794d44 nodeName:}" failed. No retries permitted until 2026-02-19 19:40:06.614624921 +0000 UTC m=+1274.405290933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift") pod "swift-storage-0" (UID: "3109e7cb-bd74-40d5-a2ab-deb7a9794d44") : configmap "swift-ring-files" not found Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.115906 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-lock\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.120156 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.120261 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ec1a95660f2481fd9c5cdf928a9647168118212d779d442f194ca83c697d5a62/globalmount\"" pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.123435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.136500 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-765ts\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-kube-api-access-765ts\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.173134 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e1a7026c-6c73-47ec-8f0a-2317030871bf\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.180148 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5" 
event={"ID":"98dd860c-7fed-409a-92ca-374370b9e80f","Type":"ContainerStarted","Data":"440487074da99b6ff39faf06d17865b92d8ba4c2c85a50e98c86a28c02d8db6d"} Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.182184 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mnh6r" event={"ID":"0fa638ca-98b0-492d-aca0-05e57d565eb0","Type":"ContainerStarted","Data":"2e1ee4b3b14de3202075d02625aaa9cecdebcb321fa7c75ce759f7836bc1705b"} Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.183430 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" event={"ID":"7c20b79d-45cd-476a-9309-d7850a869dd8","Type":"ContainerStarted","Data":"01f74f122795451a0349d1dbe9cba049903e5c215d73a211425baedc2591dad6"} Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.595771 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bk7pj"] Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.597428 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.600382 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.600681 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.601023 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.611955 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bk7pj"] Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.625457 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:06 crc kubenswrapper[4787]: E0219 19:40:06.626257 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 19:40:06 crc kubenswrapper[4787]: E0219 19:40:06.626283 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 19:40:06 crc kubenswrapper[4787]: E0219 19:40:06.626338 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift podName:3109e7cb-bd74-40d5-a2ab-deb7a9794d44 nodeName:}" failed. No retries permitted until 2026-02-19 19:40:07.626320761 +0000 UTC m=+1275.416986703 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift") pod "swift-storage-0" (UID: "3109e7cb-bd74-40d5-a2ab-deb7a9794d44") : configmap "swift-ring-files" not found Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728099 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgzz\" (UniqueName: \"kubernetes.io/projected/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-kube-api-access-6wgzz\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728220 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-scripts\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728253 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-combined-ca-bundle\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728279 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-ring-data-devices\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728324 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-etc-swift\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728549 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-swiftconf\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.728645 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-dispersionconf\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830167 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-swiftconf\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830212 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-dispersionconf\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830256 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6wgzz\" (UniqueName: \"kubernetes.io/projected/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-kube-api-access-6wgzz\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830304 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-scripts\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830330 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-combined-ca-bundle\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830380 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-ring-data-devices\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830408 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-etc-swift\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.830827 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-etc-swift\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.831990 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-scripts\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.834090 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-swiftconf\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.835343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-ring-data-devices\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.842284 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-combined-ca-bundle\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.851019 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-dispersionconf\") pod \"swift-ring-rebalance-bk7pj\" (UID: 
\"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.852218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgzz\" (UniqueName: \"kubernetes.io/projected/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-kube-api-access-6wgzz\") pod \"swift-ring-rebalance-bk7pj\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:06 crc kubenswrapper[4787]: I0219 19:40:06.979175 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.228647 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-n4sbd"] Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.230455 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.246086 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-n4sbd"] Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.251381 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdnk\" (UniqueName: \"kubernetes.io/projected/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-kube-api-access-jfdnk\") pod \"glance-db-create-n4sbd\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.251477 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-operator-scripts\") pod \"glance-db-create-n4sbd\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc 
kubenswrapper[4787]: I0219 19:40:07.333473 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-241f-account-create-update-bphqx"] Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.337623 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.339595 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.350027 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-241f-account-create-update-bphqx"] Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.354386 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbmvt\" (UniqueName: \"kubernetes.io/projected/c490b914-c022-49de-a191-891ca4991459-kube-api-access-kbmvt\") pod \"glance-241f-account-create-update-bphqx\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.354513 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-operator-scripts\") pod \"glance-db-create-n4sbd\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.354970 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c490b914-c022-49de-a191-891ca4991459-operator-scripts\") pod \"glance-241f-account-create-update-bphqx\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 
19:40:07.355081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdnk\" (UniqueName: \"kubernetes.io/projected/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-kube-api-access-jfdnk\") pod \"glance-db-create-n4sbd\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.355309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-operator-scripts\") pod \"glance-db-create-n4sbd\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.376820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdnk\" (UniqueName: \"kubernetes.io/projected/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-kube-api-access-jfdnk\") pod \"glance-db-create-n4sbd\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.457031 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c490b914-c022-49de-a191-891ca4991459-operator-scripts\") pod \"glance-241f-account-create-update-bphqx\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.457115 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbmvt\" (UniqueName: \"kubernetes.io/projected/c490b914-c022-49de-a191-891ca4991459-kube-api-access-kbmvt\") pod \"glance-241f-account-create-update-bphqx\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 
19:40:07.457862 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c490b914-c022-49de-a191-891ca4991459-operator-scripts\") pod \"glance-241f-account-create-update-bphqx\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.475423 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbmvt\" (UniqueName: \"kubernetes.io/projected/c490b914-c022-49de-a191-891ca4991459-kube-api-access-kbmvt\") pod \"glance-241f-account-create-update-bphqx\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.498166 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bk7pj"] Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.554414 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.660527 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:07 crc kubenswrapper[4787]: E0219 19:40:07.661846 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 19:40:07 crc kubenswrapper[4787]: E0219 19:40:07.661981 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 19:40:07 crc kubenswrapper[4787]: E0219 19:40:07.662154 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift podName:3109e7cb-bd74-40d5-a2ab-deb7a9794d44 nodeName:}" failed. No retries permitted until 2026-02-19 19:40:09.662019924 +0000 UTC m=+1277.452685866 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift") pod "swift-storage-0" (UID: "3109e7cb-bd74-40d5-a2ab-deb7a9794d44") : configmap "swift-ring-files" not found Feb 19 19:40:07 crc kubenswrapper[4787]: I0219 19:40:07.663892 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:08 crc kubenswrapper[4787]: W0219 19:40:08.166368 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc490b914_c022_49de_a191_891ca4991459.slice/crio-d0c13dbe533d3ba69af4963548e4173ee8bcd08781145ebd9ef4ecc99d01d0ae WatchSource:0}: Error finding container d0c13dbe533d3ba69af4963548e4173ee8bcd08781145ebd9ef4ecc99d01d0ae: Status 404 returned error can't find the container with id d0c13dbe533d3ba69af4963548e4173ee8bcd08781145ebd9ef4ecc99d01d0ae Feb 19 19:40:08 crc kubenswrapper[4787]: I0219 19:40:08.169843 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-241f-account-create-update-bphqx"] Feb 19 19:40:08 crc kubenswrapper[4787]: I0219 19:40:08.204050 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-241f-account-create-update-bphqx" event={"ID":"c490b914-c022-49de-a191-891ca4991459","Type":"ContainerStarted","Data":"d0c13dbe533d3ba69af4963548e4173ee8bcd08781145ebd9ef4ecc99d01d0ae"} Feb 19 19:40:08 crc kubenswrapper[4787]: I0219 19:40:08.209800 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0257-account-create-update-m9bdf" event={"ID":"a80f1688-cd91-4c7e-a26a-20763f129b82","Type":"ContainerStarted","Data":"8e9604f51670be8682449de31c23da3c385366cb8bbe204fd72dd368c297d4cc"} Feb 19 19:40:08 crc kubenswrapper[4787]: I0219 19:40:08.211756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bk7pj" event={"ID":"f48cb9d5-9e69-4553-b61e-e0bde367ffc7","Type":"ContainerStarted","Data":"07cc8955e34994dcf87a2d76cfdbab9170690a7f83e0aa4c6f36d8895d5b8e51"} Feb 19 19:40:08 crc kubenswrapper[4787]: I0219 19:40:08.247137 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-5z6xh" podStartSLOduration=5.247115878 
podStartE2EDuration="5.247115878s" podCreationTimestamp="2026-02-19 19:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:08.23815757 +0000 UTC m=+1276.028823512" watchObservedRunningTime="2026-02-19 19:40:08.247115878 +0000 UTC m=+1276.037781820" Feb 19 19:40:08 crc kubenswrapper[4787]: W0219 19:40:08.254824 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7643078_1dbe_4a7e_9dee_7a4886d87d5e.slice/crio-a8730813ebeb7293ee9e310740cfc0a8fa683db099fa71fe15dd5c6fdad7b0cf WatchSource:0}: Error finding container a8730813ebeb7293ee9e310740cfc0a8fa683db099fa71fe15dd5c6fdad7b0cf: Status 404 returned error can't find the container with id a8730813ebeb7293ee9e310740cfc0a8fa683db099fa71fe15dd5c6fdad7b0cf Feb 19 19:40:08 crc kubenswrapper[4787]: I0219 19:40:08.260205 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-n4sbd"] Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.028111 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9l5jm"] Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.030192 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.032594 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.043498 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9l5jm"] Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.093156 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh69z\" (UniqueName: \"kubernetes.io/projected/d1eab9cd-4b1b-42e8-973b-b57122f8b293-kube-api-access-qh69z\") pod \"root-account-create-update-9l5jm\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.093316 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1eab9cd-4b1b-42e8-973b-b57122f8b293-operator-scripts\") pod \"root-account-create-update-9l5jm\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.194456 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1eab9cd-4b1b-42e8-973b-b57122f8b293-operator-scripts\") pod \"root-account-create-update-9l5jm\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.194653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh69z\" (UniqueName: \"kubernetes.io/projected/d1eab9cd-4b1b-42e8-973b-b57122f8b293-kube-api-access-qh69z\") pod \"root-account-create-update-9l5jm\" (UID: 
\"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.195933 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1eab9cd-4b1b-42e8-973b-b57122f8b293-operator-scripts\") pod \"root-account-create-update-9l5jm\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.217870 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh69z\" (UniqueName: \"kubernetes.io/projected/d1eab9cd-4b1b-42e8-973b-b57122f8b293-kube-api-access-qh69z\") pod \"root-account-create-update-9l5jm\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.253129 4787 generic.go:334] "Generic (PLEG): container finished" podID="a80f1688-cd91-4c7e-a26a-20763f129b82" containerID="8e9604f51670be8682449de31c23da3c385366cb8bbe204fd72dd368c297d4cc" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.253169 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0257-account-create-update-m9bdf" event={"ID":"a80f1688-cd91-4c7e-a26a-20763f129b82","Type":"ContainerDied","Data":"8e9604f51670be8682449de31c23da3c385366cb8bbe204fd72dd368c297d4cc"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.257524 4787 generic.go:334] "Generic (PLEG): container finished" podID="f7643078-1dbe-4a7e-9dee-7a4886d87d5e" containerID="b85ce4c95590a9895fe602d29a9caed333ac4eb3d6ac07a6f92b1cd7f6bf1897" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.257696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-n4sbd" 
event={"ID":"f7643078-1dbe-4a7e-9dee-7a4886d87d5e","Type":"ContainerDied","Data":"b85ce4c95590a9895fe602d29a9caed333ac4eb3d6ac07a6f92b1cd7f6bf1897"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.257720 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-n4sbd" event={"ID":"f7643078-1dbe-4a7e-9dee-7a4886d87d5e","Type":"ContainerStarted","Data":"a8730813ebeb7293ee9e310740cfc0a8fa683db099fa71fe15dd5c6fdad7b0cf"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.259210 4787 generic.go:334] "Generic (PLEG): container finished" podID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerID="d003e35a67cc1d25a900af4e31ed688bca71144d5f0f6d4e1fecc509f1f991d1" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.259233 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mnh6r" event={"ID":"0fa638ca-98b0-492d-aca0-05e57d565eb0","Type":"ContainerDied","Data":"d003e35a67cc1d25a900af4e31ed688bca71144d5f0f6d4e1fecc509f1f991d1"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.262938 4787 generic.go:334] "Generic (PLEG): container finished" podID="c490b914-c022-49de-a191-891ca4991459" containerID="12478eb7e60d554dbb626e89b4b83cc71c4eb55bf543241a2fde7f633eeb7755" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.262989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-241f-account-create-update-bphqx" event={"ID":"c490b914-c022-49de-a191-891ca4991459","Type":"ContainerDied","Data":"12478eb7e60d554dbb626e89b4b83cc71c4eb55bf543241a2fde7f633eeb7755"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.263066 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:40:09 crc kubenswrapper[4787]: 
I0219 19:40:09.263363 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.282526 4787 generic.go:334] "Generic (PLEG): container finished" podID="c3d74f59-7434-4c00-8097-59f873601963" containerID="42426968039e8481053b059b66d234c35c11587ffb047931659dcb74db84ee8b" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.282585 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5z6xh" event={"ID":"c3d74f59-7434-4c00-8097-59f873601963","Type":"ContainerDied","Data":"42426968039e8481053b059b66d234c35c11587ffb047931659dcb74db84ee8b"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.291851 4787 generic.go:334] "Generic (PLEG): container finished" podID="7c20b79d-45cd-476a-9309-d7850a869dd8" containerID="6b5adf351050950dca29614aae3d2c86313c170d249e35e9e44a92515111eabb" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.291970 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" event={"ID":"7c20b79d-45cd-476a-9309-d7850a869dd8","Type":"ContainerDied","Data":"6b5adf351050950dca29614aae3d2c86313c170d249e35e9e44a92515111eabb"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.295446 4787 generic.go:334] "Generic (PLEG): container finished" podID="6fe5aa4b-4d38-4918-adc4-b507bb6f1317" containerID="7313a99dbe4e49fced36e6c34ec33b98c51b21c36aeca38beaf09aa6c20a1859" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.295523 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec9b-account-create-update-x8rkj" 
event={"ID":"6fe5aa4b-4d38-4918-adc4-b507bb6f1317","Type":"ContainerDied","Data":"7313a99dbe4e49fced36e6c34ec33b98c51b21c36aeca38beaf09aa6c20a1859"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.297442 4787 generic.go:334] "Generic (PLEG): container finished" podID="98dd860c-7fed-409a-92ca-374370b9e80f" containerID="5688eafa5b22d308de45b985431d8601c72dc3bd89291e3abe7e3fceb6601dd2" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.297514 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5" event={"ID":"98dd860c-7fed-409a-92ca-374370b9e80f","Type":"ContainerDied","Data":"5688eafa5b22d308de45b985431d8601c72dc3bd89291e3abe7e3fceb6601dd2"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.299441 4787 generic.go:334] "Generic (PLEG): container finished" podID="9002f19b-58a4-49df-9f61-945f8bca211e" containerID="c6ff3aa2a11131309e7d1f067abb69465ff674a9b54695cc115ddb857042c6e1" exitCode=0 Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.299475 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v6kj2" event={"ID":"9002f19b-58a4-49df-9f61-945f8bca211e","Type":"ContainerDied","Data":"c6ff3aa2a11131309e7d1f067abb69465ff674a9b54695cc115ddb857042c6e1"} Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.429792 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:09 crc kubenswrapper[4787]: I0219 19:40:09.708172 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:09 crc kubenswrapper[4787]: E0219 19:40:09.708432 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 19:40:09 crc kubenswrapper[4787]: E0219 19:40:09.708449 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 19:40:09 crc kubenswrapper[4787]: E0219 19:40:09.708499 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift podName:3109e7cb-bd74-40d5-a2ab-deb7a9794d44 nodeName:}" failed. No retries permitted until 2026-02-19 19:40:13.708481412 +0000 UTC m=+1281.499147354 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift") pod "swift-storage-0" (UID: "3109e7cb-bd74-40d5-a2ab-deb7a9794d44") : configmap "swift-ring-files" not found Feb 19 19:40:11 crc kubenswrapper[4787]: I0219 19:40:11.875473 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-bd66c8fd6-b6vcd" podUID="59942447-448c-4d2e-b1ac-fe695185fc0e" containerName="console" containerID="cri-o://b5123fedff0152a767d5feeda526dcd30d31bcb07ccdf31903adad1ab4b38374" gracePeriod=15 Feb 19 19:40:12 crc kubenswrapper[4787]: I0219 19:40:12.338026 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd66c8fd6-b6vcd_59942447-448c-4d2e-b1ac-fe695185fc0e/console/0.log" Feb 19 19:40:12 crc kubenswrapper[4787]: I0219 19:40:12.338082 4787 generic.go:334] "Generic (PLEG): container finished" podID="59942447-448c-4d2e-b1ac-fe695185fc0e" containerID="b5123fedff0152a767d5feeda526dcd30d31bcb07ccdf31903adad1ab4b38374" exitCode=2 Feb 19 19:40:12 crc kubenswrapper[4787]: I0219 19:40:12.338117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bd66c8fd6-b6vcd" event={"ID":"59942447-448c-4d2e-b1ac-fe695185fc0e","Type":"ContainerDied","Data":"b5123fedff0152a767d5feeda526dcd30d31bcb07ccdf31903adad1ab4b38374"} Feb 19 19:40:12 crc kubenswrapper[4787]: I0219 19:40:12.443738 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.137787 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.142906 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ec9b-account-create-update-x8rkj" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.147934 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.162264 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v6kj2" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.267622 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.314961 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2x6x\" (UniqueName: \"kubernetes.io/projected/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-kube-api-access-v2x6x\") pod \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315014 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9002f19b-58a4-49df-9f61-945f8bca211e-operator-scripts\") pod \"9002f19b-58a4-49df-9f61-945f8bca211e\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315036 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdgb4\" (UniqueName: \"kubernetes.io/projected/9002f19b-58a4-49df-9f61-945f8bca211e-kube-api-access-bdgb4\") pod \"9002f19b-58a4-49df-9f61-945f8bca211e\" (UID: \"9002f19b-58a4-49df-9f61-945f8bca211e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315056 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/98dd860c-7fed-409a-92ca-374370b9e80f-operator-scripts\") pod \"98dd860c-7fed-409a-92ca-374370b9e80f\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315103 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktv6w\" (UniqueName: \"kubernetes.io/projected/7c20b79d-45cd-476a-9309-d7850a869dd8-kube-api-access-ktv6w\") pod \"7c20b79d-45cd-476a-9309-d7850a869dd8\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315128 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbmvt\" (UniqueName: \"kubernetes.io/projected/c490b914-c022-49de-a191-891ca4991459-kube-api-access-kbmvt\") pod \"c490b914-c022-49de-a191-891ca4991459\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315157 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c20b79d-45cd-476a-9309-d7850a869dd8-operator-scripts\") pod \"7c20b79d-45cd-476a-9309-d7850a869dd8\" (UID: \"7c20b79d-45cd-476a-9309-d7850a869dd8\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315174 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmp7f\" (UniqueName: \"kubernetes.io/projected/98dd860c-7fed-409a-92ca-374370b9e80f-kube-api-access-lmp7f\") pod \"98dd860c-7fed-409a-92ca-374370b9e80f\" (UID: \"98dd860c-7fed-409a-92ca-374370b9e80f\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315213 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-operator-scripts\") pod \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\" (UID: \"6fe5aa4b-4d38-4918-adc4-b507bb6f1317\") " 
Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315226 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c490b914-c022-49de-a191-891ca4991459-operator-scripts\") pod \"c490b914-c022-49de-a191-891ca4991459\" (UID: \"c490b914-c022-49de-a191-891ca4991459\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.315669 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0257-account-create-update-m9bdf" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.316019 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9002f19b-58a4-49df-9f61-945f8bca211e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9002f19b-58a4-49df-9f61-945f8bca211e" (UID: "9002f19b-58a4-49df-9f61-945f8bca211e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.321078 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b914-c022-49de-a191-891ca4991459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c490b914-c022-49de-a191-891ca4991459" (UID: "c490b914-c022-49de-a191-891ca4991459"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.322851 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98dd860c-7fed-409a-92ca-374370b9e80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98dd860c-7fed-409a-92ca-374370b9e80f" (UID: "98dd860c-7fed-409a-92ca-374370b9e80f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.324774 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fe5aa4b-4d38-4918-adc4-b507bb6f1317" (UID: "6fe5aa4b-4d38-4918-adc4-b507bb6f1317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.325888 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dd860c-7fed-409a-92ca-374370b9e80f-kube-api-access-lmp7f" (OuterVolumeSpecName: "kube-api-access-lmp7f") pod "98dd860c-7fed-409a-92ca-374370b9e80f" (UID: "98dd860c-7fed-409a-92ca-374370b9e80f"). InnerVolumeSpecName "kube-api-access-lmp7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.326755 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c20b79d-45cd-476a-9309-d7850a869dd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c20b79d-45cd-476a-9309-d7850a869dd8" (UID: "7c20b79d-45cd-476a-9309-d7850a869dd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.356910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c20b79d-45cd-476a-9309-d7850a869dd8-kube-api-access-ktv6w" (OuterVolumeSpecName: "kube-api-access-ktv6w") pod "7c20b79d-45cd-476a-9309-d7850a869dd8" (UID: "7c20b79d-45cd-476a-9309-d7850a869dd8"). InnerVolumeSpecName "kube-api-access-ktv6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.361395 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-kube-api-access-v2x6x" (OuterVolumeSpecName: "kube-api-access-v2x6x") pod "6fe5aa4b-4d38-4918-adc4-b507bb6f1317" (UID: "6fe5aa4b-4d38-4918-adc4-b507bb6f1317"). InnerVolumeSpecName "kube-api-access-v2x6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.370905 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9002f19b-58a4-49df-9f61-945f8bca211e-kube-api-access-bdgb4" (OuterVolumeSpecName: "kube-api-access-bdgb4") pod "9002f19b-58a4-49df-9f61-945f8bca211e" (UID: "9002f19b-58a4-49df-9f61-945f8bca211e"). InnerVolumeSpecName "kube-api-access-bdgb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.371031 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c490b914-c022-49de-a191-891ca4991459-kube-api-access-kbmvt" (OuterVolumeSpecName: "kube-api-access-kbmvt") pod "c490b914-c022-49de-a191-891ca4991459" (UID: "c490b914-c022-49de-a191-891ca4991459"). InnerVolumeSpecName "kube-api-access-kbmvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420850 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbmvt\" (UniqueName: \"kubernetes.io/projected/c490b914-c022-49de-a191-891ca4991459-kube-api-access-kbmvt\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420890 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c20b79d-45cd-476a-9309-d7850a869dd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420903 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmp7f\" (UniqueName: \"kubernetes.io/projected/98dd860c-7fed-409a-92ca-374370b9e80f-kube-api-access-lmp7f\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420914 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420926 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c490b914-c022-49de-a191-891ca4991459-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420937 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2x6x\" (UniqueName: \"kubernetes.io/projected/6fe5aa4b-4d38-4918-adc4-b507bb6f1317-kube-api-access-v2x6x\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420948 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9002f19b-58a4-49df-9f61-945f8bca211e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420958 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdgb4\" (UniqueName: \"kubernetes.io/projected/9002f19b-58a4-49df-9f61-945f8bca211e-kube-api-access-bdgb4\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420970 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98dd860c-7fed-409a-92ca-374370b9e80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.420980 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktv6w\" (UniqueName: \"kubernetes.io/projected/7c20b79d-45cd-476a-9309-d7850a869dd8-kube-api-access-ktv6w\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.470636 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0257-account-create-update-m9bdf" event={"ID":"a80f1688-cd91-4c7e-a26a-20763f129b82","Type":"ContainerDied","Data":"e19a94b75d579801708f035b0285721bc9accd3fcf2df45b04ef7f866a52cfd0"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.470678 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e19a94b75d579801708f035b0285721bc9accd3fcf2df45b04ef7f866a52cfd0" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.470840 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0257-account-create-update-m9bdf" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.475065 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5z6xh" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.480463 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec9b-account-create-update-x8rkj" event={"ID":"6fe5aa4b-4d38-4918-adc4-b507bb6f1317","Type":"ContainerDied","Data":"2ef5515dc30c1e5dfba0cead35998b1ac8afaafe38795908aecb331564c50b6a"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.480509 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef5515dc30c1e5dfba0cead35998b1ac8afaafe38795908aecb331564c50b6a" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.480663 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec9b-account-create-update-x8rkj" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.482058 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.498019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5" event={"ID":"98dd860c-7fed-409a-92ca-374370b9e80f","Type":"ContainerDied","Data":"440487074da99b6ff39faf06d17865b92d8ba4c2c85a50e98c86a28c02d8db6d"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.498316 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440487074da99b6ff39faf06d17865b92d8ba4c2c85a50e98c86a28c02d8db6d" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.498045 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7jdz5" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.507813 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v6kj2" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.508819 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v6kj2" event={"ID":"9002f19b-58a4-49df-9f61-945f8bca211e","Type":"ContainerDied","Data":"867be34c8759a53b1ee2f50558903b0f61b271f77d113f69988946ad34110c8e"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.508872 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867be34c8759a53b1ee2f50558903b0f61b271f77d113f69988946ad34110c8e" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.515640 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-n4sbd" event={"ID":"f7643078-1dbe-4a7e-9dee-7a4886d87d5e","Type":"ContainerDied","Data":"a8730813ebeb7293ee9e310740cfc0a8fa683db099fa71fe15dd5c6fdad7b0cf"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.515664 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-n4sbd" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.515680 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8730813ebeb7293ee9e310740cfc0a8fa683db099fa71fe15dd5c6fdad7b0cf" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.515788 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd66c8fd6-b6vcd_59942447-448c-4d2e-b1ac-fe695185fc0e/console/0.log" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.515866 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.521688 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-241f-account-create-update-bphqx" event={"ID":"c490b914-c022-49de-a191-891ca4991459","Type":"ContainerDied","Data":"d0c13dbe533d3ba69af4963548e4173ee8bcd08781145ebd9ef4ecc99d01d0ae"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.521730 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c13dbe533d3ba69af4963548e4173ee8bcd08781145ebd9ef4ecc99d01d0ae" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.521801 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-241f-account-create-update-bphqx" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.521868 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80f1688-cd91-4c7e-a26a-20763f129b82-operator-scripts\") pod \"a80f1688-cd91-4c7e-a26a-20763f129b82\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.521990 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghrnd\" (UniqueName: \"kubernetes.io/projected/a80f1688-cd91-4c7e-a26a-20763f129b82-kube-api-access-ghrnd\") pod \"a80f1688-cd91-4c7e-a26a-20763f129b82\" (UID: \"a80f1688-cd91-4c7e-a26a-20763f129b82\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.527179 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a80f1688-cd91-4c7e-a26a-20763f129b82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a80f1688-cd91-4c7e-a26a-20763f129b82" (UID: "a80f1688-cd91-4c7e-a26a-20763f129b82"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.529397 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80f1688-cd91-4c7e-a26a-20763f129b82-kube-api-access-ghrnd" (OuterVolumeSpecName: "kube-api-access-ghrnd") pod "a80f1688-cd91-4c7e-a26a-20763f129b82" (UID: "a80f1688-cd91-4c7e-a26a-20763f129b82"). InnerVolumeSpecName "kube-api-access-ghrnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.532153 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5z6xh" event={"ID":"c3d74f59-7434-4c00-8097-59f873601963","Type":"ContainerDied","Data":"aa84dc917629adbca33668f81e1c3a6466cc9ea333070d7954168e152a8915a1"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.532185 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa84dc917629adbca33668f81e1c3a6466cc9ea333070d7954168e152a8915a1" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.532235 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5z6xh" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.547371 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" event={"ID":"7c20b79d-45cd-476a-9309-d7850a869dd8","Type":"ContainerDied","Data":"01f74f122795451a0349d1dbe9cba049903e5c215d73a211425baedc2591dad6"} Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.548204 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f74f122795451a0349d1dbe9cba049903e5c215d73a211425baedc2591dad6" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.547536 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0b91-account-create-update-npwhw" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624692 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsj2k\" (UniqueName: \"kubernetes.io/projected/c3d74f59-7434-4c00-8097-59f873601963-kube-api-access-vsj2k\") pod \"c3d74f59-7434-4c00-8097-59f873601963\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfdnk\" (UniqueName: \"kubernetes.io/projected/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-kube-api-access-jfdnk\") pod \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624832 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-operator-scripts\") pod \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\" (UID: \"f7643078-1dbe-4a7e-9dee-7a4886d87d5e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624917 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d74f59-7434-4c00-8097-59f873601963-operator-scripts\") pod \"c3d74f59-7434-4c00-8097-59f873601963\" (UID: \"c3d74f59-7434-4c00-8097-59f873601963\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624938 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-console-config\") pod \"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624974 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-serving-cert\") pod \"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.624996 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-oauth-config\") pod \"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625020 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-trusted-ca-bundle\") pod \"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625122 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5crb\" (UniqueName: \"kubernetes.io/projected/59942447-448c-4d2e-b1ac-fe695185fc0e-kube-api-access-b5crb\") pod \"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625143 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-service-ca\") pod \"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625161 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-oauth-serving-cert\") pod 
\"59942447-448c-4d2e-b1ac-fe695185fc0e\" (UID: \"59942447-448c-4d2e-b1ac-fe695185fc0e\") " Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625842 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80f1688-cd91-4c7e-a26a-20763f129b82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625857 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghrnd\" (UniqueName: \"kubernetes.io/projected/a80f1688-cd91-4c7e-a26a-20763f129b82-kube-api-access-ghrnd\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.626348 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7643078-1dbe-4a7e-9dee-7a4886d87d5e" (UID: "f7643078-1dbe-4a7e-9dee-7a4886d87d5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.625965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-console-config" (OuterVolumeSpecName: "console-config") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.626702 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3d74f59-7434-4c00-8097-59f873601963-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3d74f59-7434-4c00-8097-59f873601963" (UID: "c3d74f59-7434-4c00-8097-59f873601963"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.626953 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.627066 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-service-ca" (OuterVolumeSpecName: "service-ca") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.628029 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.631443 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-kube-api-access-jfdnk" (OuterVolumeSpecName: "kube-api-access-jfdnk") pod "f7643078-1dbe-4a7e-9dee-7a4886d87d5e" (UID: "f7643078-1dbe-4a7e-9dee-7a4886d87d5e"). InnerVolumeSpecName "kube-api-access-jfdnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.631968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.631990 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59942447-448c-4d2e-b1ac-fe695185fc0e-kube-api-access-b5crb" (OuterVolumeSpecName: "kube-api-access-b5crb") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "kube-api-access-b5crb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.632812 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d74f59-7434-4c00-8097-59f873601963-kube-api-access-vsj2k" (OuterVolumeSpecName: "kube-api-access-vsj2k") pod "c3d74f59-7434-4c00-8097-59f873601963" (UID: "c3d74f59-7434-4c00-8097-59f873601963"). InnerVolumeSpecName "kube-api-access-vsj2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.636511 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "59942447-448c-4d2e-b1ac-fe695185fc0e" (UID: "59942447-448c-4d2e-b1ac-fe695185fc0e"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727534 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727735 4787 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727749 4787 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727757 4787 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59942447-448c-4d2e-b1ac-fe695185fc0e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: E0219 19:40:13.727755 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 19:40:13 crc kubenswrapper[4787]: E0219 19:40:13.727809 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 19:40:13 crc kubenswrapper[4787]: E0219 19:40:13.727862 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift podName:3109e7cb-bd74-40d5-a2ab-deb7a9794d44 nodeName:}" failed. No retries permitted until 2026-02-19 19:40:21.727844699 +0000 UTC m=+1289.518510641 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift") pod "swift-storage-0" (UID: "3109e7cb-bd74-40d5-a2ab-deb7a9794d44") : configmap "swift-ring-files" not found Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727766 4787 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727914 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5crb\" (UniqueName: \"kubernetes.io/projected/59942447-448c-4d2e-b1ac-fe695185fc0e-kube-api-access-b5crb\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727924 4787 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727932 4787 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59942447-448c-4d2e-b1ac-fe695185fc0e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727941 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsj2k\" (UniqueName: \"kubernetes.io/projected/c3d74f59-7434-4c00-8097-59f873601963-kube-api-access-vsj2k\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727951 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfdnk\" (UniqueName: \"kubernetes.io/projected/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-kube-api-access-jfdnk\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727963 4787 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7643078-1dbe-4a7e-9dee-7a4886d87d5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.727972 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d74f59-7434-4c00-8097-59f873601963-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:13 crc kubenswrapper[4787]: I0219 19:40:13.794117 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9l5jm"] Feb 19 19:40:13 crc kubenswrapper[4787]: W0219 19:40:13.794960 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1eab9cd_4b1b_42e8_973b_b57122f8b293.slice/crio-1db7950e2dde19bbc4e6f6df5cde9a39381747e1bdb3e313fc24d34b53bb3a7d WatchSource:0}: Error finding container 1db7950e2dde19bbc4e6f6df5cde9a39381747e1bdb3e313fc24d34b53bb3a7d: Status 404 returned error can't find the container with id 1db7950e2dde19bbc4e6f6df5cde9a39381747e1bdb3e313fc24d34b53bb3a7d Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.572586 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mnh6r" event={"ID":"0fa638ca-98b0-492d-aca0-05e57d565eb0","Type":"ContainerStarted","Data":"de9aa46bcb0c6e2b809cc9f5fb6f13c6bb1dcf88d5fc3facfc9b39ebfbed40f0"} Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.574619 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.577101 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-bd66c8fd6-b6vcd_59942447-448c-4d2e-b1ac-fe695185fc0e/console/0.log" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.577167 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-bd66c8fd6-b6vcd" event={"ID":"59942447-448c-4d2e-b1ac-fe695185fc0e","Type":"ContainerDied","Data":"d80f07b146ec970efe0845f35e8e45dedfbf196fab8af95271167ad3985f41a5"} Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.577200 4787 scope.go:117] "RemoveContainer" containerID="b5123fedff0152a767d5feeda526dcd30d31bcb07ccdf31903adad1ab4b38374" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.577312 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bd66c8fd6-b6vcd" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.582556 4787 generic.go:334] "Generic (PLEG): container finished" podID="d1eab9cd-4b1b-42e8-973b-b57122f8b293" containerID="68530f4faae8bf8c7243c2403909396a037e76649b77f5938368393ac3e98a50" exitCode=0 Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.582757 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9l5jm" event={"ID":"d1eab9cd-4b1b-42e8-973b-b57122f8b293","Type":"ContainerDied","Data":"68530f4faae8bf8c7243c2403909396a037e76649b77f5938368393ac3e98a50"} Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.582799 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9l5jm" event={"ID":"d1eab9cd-4b1b-42e8-973b-b57122f8b293","Type":"ContainerStarted","Data":"1db7950e2dde19bbc4e6f6df5cde9a39381747e1bdb3e313fc24d34b53bb3a7d"} Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.589741 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerStarted","Data":"a03f1084bf373a9bdd82a234b100516fab18302c57f6d2a2a902e123f1db0d79"} Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.592577 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-mnh6r" 
podStartSLOduration=10.592556417 podStartE2EDuration="10.592556417s" podCreationTimestamp="2026-02-19 19:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:14.58953292 +0000 UTC m=+1282.380198872" watchObservedRunningTime="2026-02-19 19:40:14.592556417 +0000 UTC m=+1282.383222359" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.633747 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-bd66c8fd6-b6vcd"] Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.646483 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-bd66c8fd6-b6vcd"] Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.652275 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=5.356654041 podStartE2EDuration="1m0.652254307s" podCreationTimestamp="2026-02-19 19:39:14 +0000 UTC" firstStartedPulling="2026-02-19 19:39:17.763704026 +0000 UTC m=+1225.554369958" lastFinishedPulling="2026-02-19 19:40:13.059304282 +0000 UTC m=+1280.849970224" observedRunningTime="2026-02-19 19:40:14.646246084 +0000 UTC m=+1282.436912036" watchObservedRunningTime="2026-02-19 19:40:14.652254307 +0000 UTC m=+1282.442920249" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.730749 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 19:40:14 crc kubenswrapper[4787]: I0219 19:40:14.903499 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59942447-448c-4d2e-b1ac-fe695185fc0e" path="/var/lib/kubelet/pods/59942447-448c-4d2e-b1ac-fe695185fc0e/volumes" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.116626 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6"] Feb 19 19:40:15 crc kubenswrapper[4787]: 
E0219 19:40:15.117228 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9002f19b-58a4-49df-9f61-945f8bca211e" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117255 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9002f19b-58a4-49df-9f61-945f8bca211e" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117279 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59942447-448c-4d2e-b1ac-fe695185fc0e" containerName="console" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117288 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59942447-448c-4d2e-b1ac-fe695185fc0e" containerName="console" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117305 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c20b79d-45cd-476a-9309-d7850a869dd8" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117316 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c20b79d-45cd-476a-9309-d7850a869dd8" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117330 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c490b914-c022-49de-a191-891ca4991459" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117340 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c490b914-c022-49de-a191-891ca4991459" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117354 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80f1688-cd91-4c7e-a26a-20763f129b82" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117362 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80f1688-cd91-4c7e-a26a-20763f129b82" 
containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117377 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe5aa4b-4d38-4918-adc4-b507bb6f1317" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117385 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe5aa4b-4d38-4918-adc4-b507bb6f1317" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117394 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd860c-7fed-409a-92ca-374370b9e80f" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117401 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd860c-7fed-409a-92ca-374370b9e80f" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117417 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d74f59-7434-4c00-8097-59f873601963" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117424 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d74f59-7434-4c00-8097-59f873601963" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: E0219 19:40:15.117437 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7643078-1dbe-4a7e-9dee-7a4886d87d5e" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117446 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7643078-1dbe-4a7e-9dee-7a4886d87d5e" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117678 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9002f19b-58a4-49df-9f61-945f8bca211e" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117698 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7c20b79d-45cd-476a-9309-d7850a869dd8" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117712 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe5aa4b-4d38-4918-adc4-b507bb6f1317" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117730 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59942447-448c-4d2e-b1ac-fe695185fc0e" containerName="console" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117738 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dd860c-7fed-409a-92ca-374370b9e80f" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117754 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c490b914-c022-49de-a191-891ca4991459" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117766 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7643078-1dbe-4a7e-9dee-7a4886d87d5e" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117780 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d74f59-7434-4c00-8097-59f873601963" containerName="mariadb-database-create" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.117792 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80f1688-cd91-4c7e-a26a-20763f129b82" containerName="mariadb-account-create-update" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.118642 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.130903 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6"] Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.223461 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-c19d-account-create-update-fhvjl"] Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.225000 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.226859 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.233457 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c19d-account-create-update-fhvjl"] Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.258642 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16996041-3a8b-4ed9-a1f6-830900b59b28-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-5l6s6\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.258898 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwjn\" (UniqueName: \"kubernetes.io/projected/16996041-3a8b-4ed9-a1f6-830900b59b28-kube-api-access-ptwjn\") pod \"mysqld-exporter-openstack-cell1-db-create-5l6s6\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.360402 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t287t\" (UniqueName: \"kubernetes.io/projected/77aba799-1dab-40af-a989-33dc600ad004-kube-api-access-t287t\") pod \"mysqld-exporter-c19d-account-create-update-fhvjl\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.360500 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16996041-3a8b-4ed9-a1f6-830900b59b28-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-5l6s6\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.360555 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwjn\" (UniqueName: \"kubernetes.io/projected/16996041-3a8b-4ed9-a1f6-830900b59b28-kube-api-access-ptwjn\") pod \"mysqld-exporter-openstack-cell1-db-create-5l6s6\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.360585 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77aba799-1dab-40af-a989-33dc600ad004-operator-scripts\") pod \"mysqld-exporter-c19d-account-create-update-fhvjl\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.361286 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16996041-3a8b-4ed9-a1f6-830900b59b28-operator-scripts\") pod 
\"mysqld-exporter-openstack-cell1-db-create-5l6s6\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.384435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwjn\" (UniqueName: \"kubernetes.io/projected/16996041-3a8b-4ed9-a1f6-830900b59b28-kube-api-access-ptwjn\") pod \"mysqld-exporter-openstack-cell1-db-create-5l6s6\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.446013 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.463258 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t287t\" (UniqueName: \"kubernetes.io/projected/77aba799-1dab-40af-a989-33dc600ad004-kube-api-access-t287t\") pod \"mysqld-exporter-c19d-account-create-update-fhvjl\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.463486 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77aba799-1dab-40af-a989-33dc600ad004-operator-scripts\") pod \"mysqld-exporter-c19d-account-create-update-fhvjl\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.464510 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77aba799-1dab-40af-a989-33dc600ad004-operator-scripts\") pod \"mysqld-exporter-c19d-account-create-update-fhvjl\" (UID: 
\"77aba799-1dab-40af-a989-33dc600ad004\") " pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.480386 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t287t\" (UniqueName: \"kubernetes.io/projected/77aba799-1dab-40af-a989-33dc600ad004-kube-api-access-t287t\") pod \"mysqld-exporter-c19d-account-create-update-fhvjl\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:15 crc kubenswrapper[4787]: I0219 19:40:15.549073 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.344542 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.345332 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.367412 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.525345 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.622090 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9l5jm" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.622081 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9l5jm" event={"ID":"d1eab9cd-4b1b-42e8-973b-b57122f8b293","Type":"ContainerDied","Data":"1db7950e2dde19bbc4e6f6df5cde9a39381747e1bdb3e313fc24d34b53bb3a7d"} Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.622218 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db7950e2dde19bbc4e6f6df5cde9a39381747e1bdb3e313fc24d34b53bb3a7d" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.624696 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bk7pj" event={"ID":"f48cb9d5-9e69-4553-b61e-e0bde367ffc7","Type":"ContainerStarted","Data":"593d4b81d28dbe1a54c20c98fea60d9004d4aef613e4294ba8c7f4240c81f9ee"} Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.631665 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.687643 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1eab9cd-4b1b-42e8-973b-b57122f8b293-operator-scripts\") pod \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.687763 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh69z\" (UniqueName: \"kubernetes.io/projected/d1eab9cd-4b1b-42e8-973b-b57122f8b293-kube-api-access-qh69z\") pod \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\" (UID: \"d1eab9cd-4b1b-42e8-973b-b57122f8b293\") " Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.689511 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d1eab9cd-4b1b-42e8-973b-b57122f8b293-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1eab9cd-4b1b-42e8-973b-b57122f8b293" (UID: "d1eab9cd-4b1b-42e8-973b-b57122f8b293"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.692092 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bk7pj" podStartSLOduration=1.8663102889999998 podStartE2EDuration="10.692070393s" podCreationTimestamp="2026-02-19 19:40:06 +0000 UTC" firstStartedPulling="2026-02-19 19:40:07.499423361 +0000 UTC m=+1275.290089303" lastFinishedPulling="2026-02-19 19:40:16.325183465 +0000 UTC m=+1284.115849407" observedRunningTime="2026-02-19 19:40:16.653873293 +0000 UTC m=+1284.444539245" watchObservedRunningTime="2026-02-19 19:40:16.692070393 +0000 UTC m=+1284.482736335" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.705030 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1eab9cd-4b1b-42e8-973b-b57122f8b293-kube-api-access-qh69z" (OuterVolumeSpecName: "kube-api-access-qh69z") pod "d1eab9cd-4b1b-42e8-973b-b57122f8b293" (UID: "d1eab9cd-4b1b-42e8-973b-b57122f8b293"). InnerVolumeSpecName "kube-api-access-qh69z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.792155 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1eab9cd-4b1b-42e8-973b-b57122f8b293-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.792186 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh69z\" (UniqueName: \"kubernetes.io/projected/d1eab9cd-4b1b-42e8-973b-b57122f8b293-kube-api-access-qh69z\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:16 crc kubenswrapper[4787]: W0219 19:40:16.806154 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16996041_3a8b_4ed9_a1f6_830900b59b28.slice/crio-6cb4595e669590f606811ba240e2fcf23865d29b9d06d8a1c3ef15e525d293ce WatchSource:0}: Error finding container 6cb4595e669590f606811ba240e2fcf23865d29b9d06d8a1c3ef15e525d293ce: Status 404 returned error can't find the container with id 6cb4595e669590f606811ba240e2fcf23865d29b9d06d8a1c3ef15e525d293ce Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.808497 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6"] Feb 19 19:40:16 crc kubenswrapper[4787]: I0219 19:40:16.927044 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-c19d-account-create-update-fhvjl"] Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.482349 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n4c8f" podUID="19e04b88-069d-4c44-9511-ed765c0424ae" containerName="ovn-controller" probeResult="failure" output=< Feb 19 19:40:17 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 19:40:17 crc kubenswrapper[4787]: > Feb 19 19:40:17 crc kubenswrapper[4787]: 
I0219 19:40:17.494478 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-b84c5"] Feb 19 19:40:17 crc kubenswrapper[4787]: E0219 19:40:17.495013 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1eab9cd-4b1b-42e8-973b-b57122f8b293" containerName="mariadb-account-create-update" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.495034 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1eab9cd-4b1b-42e8-973b-b57122f8b293" containerName="mariadb-account-create-update" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.495250 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1eab9cd-4b1b-42e8-973b-b57122f8b293" containerName="mariadb-account-create-update" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.496027 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.499487 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kjfdf" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.499699 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.506362 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b84c5"] Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.634754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-config-data\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.634829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzqw\" 
(UniqueName: \"kubernetes.io/projected/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-kube-api-access-6fzqw\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.634888 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-db-sync-config-data\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.634966 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-combined-ca-bundle\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.636419 4787 generic.go:334] "Generic (PLEG): container finished" podID="77aba799-1dab-40af-a989-33dc600ad004" containerID="ace57fb0950953aaada5a8dcb6dddd51c9298dd43f3eb6bd0f613c2216375d6e" exitCode=0 Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.636481 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" event={"ID":"77aba799-1dab-40af-a989-33dc600ad004","Type":"ContainerDied","Data":"ace57fb0950953aaada5a8dcb6dddd51c9298dd43f3eb6bd0f613c2216375d6e"} Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.636509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" event={"ID":"77aba799-1dab-40af-a989-33dc600ad004","Type":"ContainerStarted","Data":"8c9e56ed579b77dd216785437befd9a3974124ee1c417521fdd791b35643c458"} Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.638336 
4787 generic.go:334] "Generic (PLEG): container finished" podID="80458aec-a844-4f4d-b618-56bdc811cd43" containerID="b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8" exitCode=0 Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.638393 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80458aec-a844-4f4d-b618-56bdc811cd43","Type":"ContainerDied","Data":"b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8"} Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.642799 4787 generic.go:334] "Generic (PLEG): container finished" podID="16996041-3a8b-4ed9-a1f6-830900b59b28" containerID="4eea30e1c2d57cab6dca8c1e9ed34e82fd906aa930c93182260e4bc8ff45e3f8" exitCode=0 Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.642963 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" event={"ID":"16996041-3a8b-4ed9-a1f6-830900b59b28","Type":"ContainerDied","Data":"4eea30e1c2d57cab6dca8c1e9ed34e82fd906aa930c93182260e4bc8ff45e3f8"} Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.643010 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" event={"ID":"16996041-3a8b-4ed9-a1f6-830900b59b28","Type":"ContainerStarted","Data":"6cb4595e669590f606811ba240e2fcf23865d29b9d06d8a1c3ef15e525d293ce"} Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.644587 4787 generic.go:334] "Generic (PLEG): container finished" podID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerID="d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9" exitCode=0 Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.644631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14da78cc-cd10-440d-9983-6e80d45f3e31","Type":"ContainerDied","Data":"d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9"} Feb 19 19:40:17 crc 
kubenswrapper[4787]: I0219 19:40:17.647011 4787 generic.go:334] "Generic (PLEG): container finished" podID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerID="b981e8e43871a722314887a58a1a921805ea8edc9b175836152b862fcf86e7c3" exitCode=0 Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.647984 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"278d26c1-8a7c-4278-b84c-0c0c24d81f52","Type":"ContainerDied","Data":"b981e8e43871a722314887a58a1a921805ea8edc9b175836152b862fcf86e7c3"} Feb 19 19:40:17 crc kubenswrapper[4787]: E0219 19:40:17.680469 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14da78cc_cd10_440d_9983_6e80d45f3e31.slice/crio-conmon-d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14da78cc_cd10_440d_9983_6e80d45f3e31.slice/crio-d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80458aec_a844_4f4d_b618_56bdc811cd43.slice/crio-b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8.scope\": RecentStats: unable to find data in memory cache]" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.738891 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-config-data\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.739069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzqw\" (UniqueName: 
\"kubernetes.io/projected/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-kube-api-access-6fzqw\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.739210 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-db-sync-config-data\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.739407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-combined-ca-bundle\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.747196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-db-sync-config-data\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.757813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-config-data\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.765488 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-combined-ca-bundle\") pod \"glance-db-sync-b84c5\" (UID: 
\"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.773744 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzqw\" (UniqueName: \"kubernetes.io/projected/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-kube-api-access-6fzqw\") pod \"glance-db-sync-b84c5\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:17 crc kubenswrapper[4787]: I0219 19:40:17.977302 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.662007 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80458aec-a844-4f4d-b618-56bdc811cd43","Type":"ContainerStarted","Data":"9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17"} Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.662774 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.666253 4787 generic.go:334] "Generic (PLEG): container finished" podID="769a015d-4883-474b-a4e8-45a2b77f2412" containerID="02ff1da193c93f126feece115773dd2392f6318ce735af54242365046cdaebba" exitCode=0 Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.666339 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"769a015d-4883-474b-a4e8-45a2b77f2412","Type":"ContainerDied","Data":"02ff1da193c93f126feece115773dd2392f6318ce735af54242365046cdaebba"} Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.672633 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14da78cc-cd10-440d-9983-6e80d45f3e31","Type":"ContainerStarted","Data":"b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4"} Feb 19 19:40:18 crc 
kubenswrapper[4787]: I0219 19:40:18.676132 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.682948 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"278d26c1-8a7c-4278-b84c-0c0c24d81f52","Type":"ContainerStarted","Data":"07746b23ec7ee77675eb24404beadeb71529a82afdac8aa4c22ed541dcd36713"} Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.683867 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.724318 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b84c5"] Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.724516 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.491596954 podStartE2EDuration="1m11.724495907s" podCreationTimestamp="2026-02-19 19:39:07 +0000 UTC" firstStartedPulling="2026-02-19 19:39:09.773313867 +0000 UTC m=+1217.563979809" lastFinishedPulling="2026-02-19 19:39:41.00621282 +0000 UTC m=+1248.796878762" observedRunningTime="2026-02-19 19:40:18.716211709 +0000 UTC m=+1286.506877671" watchObservedRunningTime="2026-02-19 19:40:18.724495907 +0000 UTC m=+1286.515161849" Feb 19 19:40:18 crc kubenswrapper[4787]: I0219 19:40:18.759146 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371965.095648 podStartE2EDuration="1m11.759127105s" podCreationTimestamp="2026-02-19 19:39:07 +0000 UTC" firstStartedPulling="2026-02-19 19:39:10.064918273 +0000 UTC m=+1217.855584225" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:18.748426307 +0000 UTC m=+1286.539092249" watchObservedRunningTime="2026-02-19 19:40:18.759127105 +0000 UTC m=+1286.549793047" Feb 19 
19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.313948 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.331602 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371964.523197 podStartE2EDuration="1m12.331578874s" podCreationTimestamp="2026-02-19 19:39:07 +0000 UTC" firstStartedPulling="2026-02-19 19:39:09.975513153 +0000 UTC m=+1217.766179095" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:18.833677382 +0000 UTC m=+1286.624343324" watchObservedRunningTime="2026-02-19 19:40:19.331578874 +0000 UTC m=+1287.122244826" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.388262 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.403891 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.406087 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwjn\" (UniqueName: \"kubernetes.io/projected/16996041-3a8b-4ed9-a1f6-830900b59b28-kube-api-access-ptwjn\") pod \"16996041-3a8b-4ed9-a1f6-830900b59b28\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.406166 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16996041-3a8b-4ed9-a1f6-830900b59b28-operator-scripts\") pod \"16996041-3a8b-4ed9-a1f6-830900b59b28\" (UID: \"16996041-3a8b-4ed9-a1f6-830900b59b28\") " Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.407519 4787 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16996041-3a8b-4ed9-a1f6-830900b59b28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16996041-3a8b-4ed9-a1f6-830900b59b28" (UID: "16996041-3a8b-4ed9-a1f6-830900b59b28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.413150 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16996041-3a8b-4ed9-a1f6-830900b59b28-kube-api-access-ptwjn" (OuterVolumeSpecName: "kube-api-access-ptwjn") pod "16996041-3a8b-4ed9-a1f6-830900b59b28" (UID: "16996041-3a8b-4ed9-a1f6-830900b59b28"). InnerVolumeSpecName "kube-api-access-ptwjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.508558 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77aba799-1dab-40af-a989-33dc600ad004-operator-scripts\") pod \"77aba799-1dab-40af-a989-33dc600ad004\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.508676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t287t\" (UniqueName: \"kubernetes.io/projected/77aba799-1dab-40af-a989-33dc600ad004-kube-api-access-t287t\") pod \"77aba799-1dab-40af-a989-33dc600ad004\" (UID: \"77aba799-1dab-40af-a989-33dc600ad004\") " Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.510091 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77aba799-1dab-40af-a989-33dc600ad004-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77aba799-1dab-40af-a989-33dc600ad004" (UID: "77aba799-1dab-40af-a989-33dc600ad004"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.511030 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77aba799-1dab-40af-a989-33dc600ad004-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.511068 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwjn\" (UniqueName: \"kubernetes.io/projected/16996041-3a8b-4ed9-a1f6-830900b59b28-kube-api-access-ptwjn\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.511084 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16996041-3a8b-4ed9-a1f6-830900b59b28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.518849 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77aba799-1dab-40af-a989-33dc600ad004-kube-api-access-t287t" (OuterVolumeSpecName: "kube-api-access-t287t") pod "77aba799-1dab-40af-a989-33dc600ad004" (UID: "77aba799-1dab-40af-a989-33dc600ad004"). InnerVolumeSpecName "kube-api-access-t287t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.612490 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t287t\" (UniqueName: \"kubernetes.io/projected/77aba799-1dab-40af-a989-33dc600ad004-kube-api-access-t287t\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.694843 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" event={"ID":"77aba799-1dab-40af-a989-33dc600ad004","Type":"ContainerDied","Data":"8c9e56ed579b77dd216785437befd9a3974124ee1c417521fdd791b35643c458"} Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.694880 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c9e56ed579b77dd216785437befd9a3974124ee1c417521fdd791b35643c458" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.694937 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-c19d-account-create-update-fhvjl" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.707077 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"769a015d-4883-474b-a4e8-45a2b77f2412","Type":"ContainerStarted","Data":"409d0d347d59a437026f283198fe2e5aeafaf1b69b9ca6360c526effd05789dc"} Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.707316 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.713521 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" event={"ID":"16996041-3a8b-4ed9-a1f6-830900b59b28","Type":"ContainerDied","Data":"6cb4595e669590f606811ba240e2fcf23865d29b9d06d8a1c3ef15e525d293ce"} Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.713548 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb4595e669590f606811ba240e2fcf23865d29b9d06d8a1c3ef15e525d293ce" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.713599 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6" Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.724916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b84c5" event={"ID":"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570","Type":"ContainerStarted","Data":"0cc7224a69e35b5b24f879b2843d0dcfa03ea4000652ead0d702b7ed7a8f64d6"} Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.725140 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="prometheus" containerID="cri-o://2db65f035436118b6796dbab1e9c7b0c372675ae01fa9c36fcd6cf9b0cfb0e8a" gracePeriod=600 Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.725788 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="thanos-sidecar" containerID="cri-o://a03f1084bf373a9bdd82a234b100516fab18302c57f6d2a2a902e123f1db0d79" gracePeriod=600 Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.725850 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="config-reloader" containerID="cri-o://1e53372b9cadb5188d3d20605334d615602e746930def99032ef4799572022e6" gracePeriod=600 Feb 19 19:40:19 crc kubenswrapper[4787]: I0219 19:40:19.737214 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371965.117577 podStartE2EDuration="1m11.737198297s" podCreationTimestamp="2026-02-19 19:39:08 +0000 UTC" firstStartedPulling="2026-02-19 19:39:10.197338116 +0000 UTC m=+1217.988004058" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:19.733866751 +0000 UTC m=+1287.524532693" 
watchObservedRunningTime="2026-02-19 19:40:19.737198297 +0000 UTC m=+1287.527864239" Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.238759 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.335371 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vn96g"] Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.335596 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerName="dnsmasq-dns" containerID="cri-o://3e2ca2fb541d0ab37f4bb376032c7651987295414eacedc7f13b79c782afb86c" gracePeriod=10 Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.498926 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9l5jm"] Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.514618 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9l5jm"] Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.743363 4787 generic.go:334] "Generic (PLEG): container finished" podID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerID="a03f1084bf373a9bdd82a234b100516fab18302c57f6d2a2a902e123f1db0d79" exitCode=0 Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.743712 4787 generic.go:334] "Generic (PLEG): container finished" podID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerID="1e53372b9cadb5188d3d20605334d615602e746930def99032ef4799572022e6" exitCode=0 Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.743724 4787 generic.go:334] "Generic (PLEG): container finished" podID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerID="2db65f035436118b6796dbab1e9c7b0c372675ae01fa9c36fcd6cf9b0cfb0e8a" exitCode=0 Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.743776 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerDied","Data":"a03f1084bf373a9bdd82a234b100516fab18302c57f6d2a2a902e123f1db0d79"} Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.743805 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerDied","Data":"1e53372b9cadb5188d3d20605334d615602e746930def99032ef4799572022e6"} Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.743818 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerDied","Data":"2db65f035436118b6796dbab1e9c7b0c372675ae01fa9c36fcd6cf9b0cfb0e8a"} Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.746772 4787 generic.go:334] "Generic (PLEG): container finished" podID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerID="3e2ca2fb541d0ab37f4bb376032c7651987295414eacedc7f13b79c782afb86c" exitCode=0 Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.747919 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" event={"ID":"e629dc49-6c0f-43b2-844c-88658e0dd5ac","Type":"ContainerDied","Data":"3e2ca2fb541d0ab37f4bb376032c7651987295414eacedc7f13b79c782afb86c"} Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.916964 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1eab9cd-4b1b-42e8-973b-b57122f8b293" path="/var/lib/kubelet/pods/d1eab9cd-4b1b-42e8-973b-b57122f8b293/volumes" Feb 19 19:40:20 crc kubenswrapper[4787]: I0219 19:40:20.988035 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.044713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.044880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.044906 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nth6t\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-kube-api-access-nth6t\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.044929 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-thanos-prometheus-http-client-file\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.044976 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-web-config\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.045006 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config-out\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.045101 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-0\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.045220 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-1\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.045279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-2\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.045317 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-tls-assets\") pod \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\" (UID: \"1ea71272-2b73-49c3-a5e2-57e7ac632a7f\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.046744 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.047045 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.047286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.057809 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-kube-api-access-nth6t" (OuterVolumeSpecName: "kube-api-access-nth6t") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "kube-api-access-nth6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.059776 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config" (OuterVolumeSpecName: "config") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.061183 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config-out" (OuterVolumeSpecName: "config-out") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.062320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.064041 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.109626 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-web-config" (OuterVolumeSpecName: "web-config") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147256 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nth6t\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-kube-api-access-nth6t\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147460 4787 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147521 4787 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147573 4787 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147639 4787 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147695 4787 reconciler_common.go:293] "Volume 
detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147745 4787 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147803 4787 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.147858 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ea71272-2b73-49c3-a5e2-57e7ac632a7f-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.177935 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.201349 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1ea71272-2b73-49c3-a5e2-57e7ac632a7f" (UID: "1ea71272-2b73-49c3-a5e2-57e7ac632a7f"). InnerVolumeSpecName "pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.253019 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6rfb\" (UniqueName: \"kubernetes.io/projected/e629dc49-6c0f-43b2-844c-88658e0dd5ac-kube-api-access-j6rfb\") pod \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.253110 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-nb\") pod \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.253141 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-dns-svc\") pod \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.253209 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-config\") pod \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.253313 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-sb\") pod \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\" (UID: \"e629dc49-6c0f-43b2-844c-88658e0dd5ac\") " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.253768 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") on node \"crc\" " Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.263857 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e629dc49-6c0f-43b2-844c-88658e0dd5ac-kube-api-access-j6rfb" (OuterVolumeSpecName: "kube-api-access-j6rfb") pod "e629dc49-6c0f-43b2-844c-88658e0dd5ac" (UID: "e629dc49-6c0f-43b2-844c-88658e0dd5ac"). InnerVolumeSpecName "kube-api-access-j6rfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.291366 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.291875 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0") on node "crc" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.319445 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e629dc49-6c0f-43b2-844c-88658e0dd5ac" (UID: "e629dc49-6c0f-43b2-844c-88658e0dd5ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.324483 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-config" (OuterVolumeSpecName: "config") pod "e629dc49-6c0f-43b2-844c-88658e0dd5ac" (UID: "e629dc49-6c0f-43b2-844c-88658e0dd5ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.338000 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e629dc49-6c0f-43b2-844c-88658e0dd5ac" (UID: "e629dc49-6c0f-43b2-844c-88658e0dd5ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.339077 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e629dc49-6c0f-43b2-844c-88658e0dd5ac" (UID: "e629dc49-6c0f-43b2-844c-88658e0dd5ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.355133 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.355334 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.355390 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.355452 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e629dc49-6c0f-43b2-844c-88658e0dd5ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 
crc kubenswrapper[4787]: I0219 19:40:21.355525 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.355619 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6rfb\" (UniqueName: \"kubernetes.io/projected/e629dc49-6c0f-43b2-844c-88658e0dd5ac-kube-api-access-j6rfb\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.758819 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" event={"ID":"e629dc49-6c0f-43b2-844c-88658e0dd5ac","Type":"ContainerDied","Data":"22460d1c3bc38956dc2c43260602431b6ba6b8380e860929b52ea3a1be8ac02d"} Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.758899 4787 scope.go:117] "RemoveContainer" containerID="3e2ca2fb541d0ab37f4bb376032c7651987295414eacedc7f13b79c782afb86c" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.759063 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vn96g" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.762554 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.762715 4787 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.762729 4787 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.762774 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift podName:3109e7cb-bd74-40d5-a2ab-deb7a9794d44 nodeName:}" failed. No retries permitted until 2026-02-19 19:40:37.762759884 +0000 UTC m=+1305.553425816 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift") pod "swift-storage-0" (UID: "3109e7cb-bd74-40d5-a2ab-deb7a9794d44") : configmap "swift-ring-files" not found Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.766744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1ea71272-2b73-49c3-a5e2-57e7ac632a7f","Type":"ContainerDied","Data":"c0d0e826aacf570a37c1b8bff11a3db940a0ff25fb452ecbba360abd3dd7d4c7"} Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.766829 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.808188 4787 scope.go:117] "RemoveContainer" containerID="0a670c55f01c079f50c57c070c844bffc4d20c696128d5a63cf4c5018a0e8622" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.845180 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vn96g"] Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.871724 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vn96g"] Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.888709 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.893532 4787 scope.go:117] "RemoveContainer" containerID="a03f1084bf373a9bdd82a234b100516fab18302c57f6d2a2a902e123f1db0d79" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.900852 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910016 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910618 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="thanos-sidecar" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910636 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="thanos-sidecar" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910652 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerName="dnsmasq-dns" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910659 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" 
containerName="dnsmasq-dns" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910671 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16996041-3a8b-4ed9-a1f6-830900b59b28" containerName="mariadb-database-create" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910676 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="16996041-3a8b-4ed9-a1f6-830900b59b28" containerName="mariadb-database-create" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910690 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77aba799-1dab-40af-a989-33dc600ad004" containerName="mariadb-account-create-update" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910696 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="77aba799-1dab-40af-a989-33dc600ad004" containerName="mariadb-account-create-update" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910709 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="prometheus" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910715 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="prometheus" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910734 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="init-config-reloader" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910740 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="init-config-reloader" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910765 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="config-reloader" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910772 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="config-reloader" Feb 19 19:40:21 crc kubenswrapper[4787]: E0219 19:40:21.910781 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerName="init" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910787 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerName="init" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910963 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="config-reloader" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910977 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="thanos-sidecar" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.910990 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="16996041-3a8b-4ed9-a1f6-830900b59b28" containerName="mariadb-database-create" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.911004 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" containerName="prometheus" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.911012 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="77aba799-1dab-40af-a989-33dc600ad004" containerName="mariadb-account-create-update" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.911022 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" containerName="dnsmasq-dns" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.913013 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.917090 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.917712 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.918133 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.918326 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.918577 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-ggzf2" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.918799 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.918805 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.918939 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.922352 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.924186 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.961827 4787 scope.go:117] "RemoveContainer" 
containerID="1e53372b9cadb5188d3d20605334d615602e746930def99032ef4799572022e6" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.970708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/659dcc4f-0134-40f4-a6ee-150bb5dee79b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.970773 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.970802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.970875 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.970951 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.970989 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971050 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rth9l\" (UniqueName: \"kubernetes.io/projected/659dcc4f-0134-40f4-a6ee-150bb5dee79b-kube-api-access-rth9l\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971083 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971110 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-config\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971146 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971171 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971198 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/659dcc4f-0134-40f4-a6ee-150bb5dee79b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.971232 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:21 crc kubenswrapper[4787]: I0219 19:40:21.989967 4787 scope.go:117] "RemoveContainer" containerID="2db65f035436118b6796dbab1e9c7b0c372675ae01fa9c36fcd6cf9b0cfb0e8a" 
Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.012454 4787 scope.go:117] "RemoveContainer" containerID="d35f0c51c1a567437a372c4879730c992c298e937698a29a921fbc5e015a9771" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rth9l\" (UniqueName: \"kubernetes.io/projected/659dcc4f-0134-40f4-a6ee-150bb5dee79b-kube-api-access-rth9l\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073704 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-config\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073775 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073829 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073868 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/659dcc4f-0134-40f4-a6ee-150bb5dee79b-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.073920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.074008 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/659dcc4f-0134-40f4-a6ee-150bb5dee79b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.074061 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.074095 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.074228 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.074369 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.074435 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.075981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.076018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.079343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-config\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.079723 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.080140 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/659dcc4f-0134-40f4-a6ee-150bb5dee79b-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.080569 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/659dcc4f-0134-40f4-a6ee-150bb5dee79b-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.080742 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.080816 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fb78e0f464557884d12dba3876706e0230d208030e57985f04fcfaeb4b1f767e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.081543 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/659dcc4f-0134-40f4-a6ee-150bb5dee79b-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.081983 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.083353 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.086849 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.091163 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/659dcc4f-0134-40f4-a6ee-150bb5dee79b-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.103721 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rth9l\" (UniqueName: \"kubernetes.io/projected/659dcc4f-0134-40f4-a6ee-150bb5dee79b-kube-api-access-rth9l\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.173888 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e403fa1-0d8a-4a0f-a415-0a7467e796c0\") pod \"prometheus-metric-storage-0\" (UID: \"659dcc4f-0134-40f4-a6ee-150bb5dee79b\") " pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.259814 4787 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.474003 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n4c8f" podUID="19e04b88-069d-4c44-9511-ed765c0424ae" containerName="ovn-controller" probeResult="failure" output=< Feb 19 19:40:22 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 19:40:22 crc kubenswrapper[4787]: > Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.766178 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.907404 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea71272-2b73-49c3-a5e2-57e7ac632a7f" path="/var/lib/kubelet/pods/1ea71272-2b73-49c3-a5e2-57e7ac632a7f/volumes" Feb 19 19:40:22 crc kubenswrapper[4787]: I0219 19:40:22.908399 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e629dc49-6c0f-43b2-844c-88658e0dd5ac" path="/var/lib/kubelet/pods/e629dc49-6c0f-43b2-844c-88658e0dd5ac/volumes" Feb 19 19:40:23 crc kubenswrapper[4787]: I0219 19:40:23.803278 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"659dcc4f-0134-40f4-a6ee-150bb5dee79b","Type":"ContainerStarted","Data":"125fbaf4d65a984ee011a5f6afc0d6db63b17f084e81184bc681bcbfac46f870"} Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.494307 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qhkfm"] Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.496306 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.498569 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.508777 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qhkfm"] Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.549042 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwd6\" (UniqueName: \"kubernetes.io/projected/9057a20e-ff30-454d-8c86-d42f7543571e-kube-api-access-2jwd6\") pod \"root-account-create-update-qhkfm\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.549332 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9057a20e-ff30-454d-8c86-d42f7543571e-operator-scripts\") pod \"root-account-create-update-qhkfm\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.621042 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.622337 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.624150 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.638848 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.651741 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwd6\" (UniqueName: \"kubernetes.io/projected/9057a20e-ff30-454d-8c86-d42f7543571e-kube-api-access-2jwd6\") pod \"root-account-create-update-qhkfm\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.651800 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-config-data\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.651897 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9057a20e-ff30-454d-8c86-d42f7543571e-operator-scripts\") pod \"root-account-create-update-qhkfm\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.651929 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwx25\" (UniqueName: \"kubernetes.io/projected/fa69714c-41e5-4477-9267-303589d519de-kube-api-access-zwx25\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc 
kubenswrapper[4787]: I0219 19:40:25.651969 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.653255 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9057a20e-ff30-454d-8c86-d42f7543571e-operator-scripts\") pod \"root-account-create-update-qhkfm\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.683953 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jwd6\" (UniqueName: \"kubernetes.io/projected/9057a20e-ff30-454d-8c86-d42f7543571e-kube-api-access-2jwd6\") pod \"root-account-create-update-qhkfm\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.754265 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-config-data\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.754383 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwx25\" (UniqueName: \"kubernetes.io/projected/fa69714c-41e5-4477-9267-303589d519de-kube-api-access-zwx25\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.754423 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.760328 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-config-data\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.770476 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.782637 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwx25\" (UniqueName: \"kubernetes.io/projected/fa69714c-41e5-4477-9267-303589d519de-kube-api-access-zwx25\") pod \"mysqld-exporter-0\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " pod="openstack/mysqld-exporter-0" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.818995 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.846065 4787 generic.go:334] "Generic (PLEG): container finished" podID="f48cb9d5-9e69-4553-b61e-e0bde367ffc7" containerID="593d4b81d28dbe1a54c20c98fea60d9004d4aef613e4294ba8c7f4240c81f9ee" exitCode=0 Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.846109 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bk7pj" event={"ID":"f48cb9d5-9e69-4553-b61e-e0bde367ffc7","Type":"ContainerDied","Data":"593d4b81d28dbe1a54c20c98fea60d9004d4aef613e4294ba8c7f4240c81f9ee"} Feb 19 19:40:25 crc kubenswrapper[4787]: I0219 19:40:25.943294 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 19:40:26 crc kubenswrapper[4787]: I0219 19:40:26.856980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"659dcc4f-0134-40f4-a6ee-150bb5dee79b","Type":"ContainerStarted","Data":"91f1b9825cfcd810c5a1290ab756642e899e582eb58853f18b0392a155a66dda"} Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.449813 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n4c8f" podUID="19e04b88-069d-4c44-9511-ed765c0424ae" containerName="ovn-controller" probeResult="failure" output=< Feb 19 19:40:27 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 19:40:27 crc kubenswrapper[4787]: > Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.516585 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x42pw" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.525796 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x42pw" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.793246 4787 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n4c8f-config-gbdqx"] Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.795012 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.803036 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.901506 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.901569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-additional-scripts\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.901598 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879q6\" (UniqueName: \"kubernetes.io/projected/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-kube-api-access-879q6\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.901671 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run-ovn\") pod 
\"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.901690 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-log-ovn\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.901842 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-scripts\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:27 crc kubenswrapper[4787]: I0219 19:40:27.905974 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n4c8f-config-gbdqx"] Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.005063 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-scripts\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.005244 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.005295 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-additional-scripts\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.005340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-879q6\" (UniqueName: \"kubernetes.io/projected/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-kube-api-access-879q6\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.005407 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run-ovn\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.005434 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-log-ovn\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.006322 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.007597 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-additional-scripts\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.008639 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run-ovn\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.008700 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-log-ovn\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.009530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-scripts\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.039192 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-879q6\" (UniqueName: \"kubernetes.io/projected/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-kube-api-access-879q6\") pod \"ovn-controller-n4c8f-config-gbdqx\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.156721 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:28 crc kubenswrapper[4787]: I0219 19:40:28.961361 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 19 19:40:29 crc kubenswrapper[4787]: I0219 19:40:29.082526 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 19 19:40:29 crc kubenswrapper[4787]: I0219 19:40:29.303250 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 19 19:40:29 crc kubenswrapper[4787]: I0219 19:40:29.447260 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 19 19:40:31 crc kubenswrapper[4787]: I0219 19:40:31.924547 4787 generic.go:334] "Generic (PLEG): container finished" podID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerID="91f1b9825cfcd810c5a1290ab756642e899e582eb58853f18b0392a155a66dda" exitCode=0 Feb 19 19:40:31 crc kubenswrapper[4787]: I0219 19:40:31.924679 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"659dcc4f-0134-40f4-a6ee-150bb5dee79b","Type":"ContainerDied","Data":"91f1b9825cfcd810c5a1290ab756642e899e582eb58853f18b0392a155a66dda"} Feb 19 19:40:32 crc kubenswrapper[4787]: I0219 19:40:32.441886 4787 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/ovn-controller-n4c8f" podUID="19e04b88-069d-4c44-9511-ed765c0424ae" containerName="ovn-controller" probeResult="failure" output=< Feb 19 19:40:32 crc kubenswrapper[4787]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 19:40:32 crc kubenswrapper[4787]: > Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.831019 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.930190 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-dispersionconf\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.930259 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wgzz\" (UniqueName: \"kubernetes.io/projected/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-kube-api-access-6wgzz\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.930297 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-scripts\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.930330 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-combined-ca-bundle\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 
19:40:33.930368 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-swiftconf\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.930390 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-ring-data-devices\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.930413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-etc-swift\") pod \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\" (UID: \"f48cb9d5-9e69-4553-b61e-e0bde367ffc7\") " Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.931524 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.932783 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.933292 4787 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.933312 4787 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.938390 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-kube-api-access-6wgzz" (OuterVolumeSpecName: "kube-api-access-6wgzz") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "kube-api-access-6wgzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.939776 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.954378 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bk7pj" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.954890 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bk7pj" event={"ID":"f48cb9d5-9e69-4553-b61e-e0bde367ffc7","Type":"ContainerDied","Data":"07cc8955e34994dcf87a2d76cfdbab9170690a7f83e0aa4c6f36d8895d5b8e51"} Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.954921 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07cc8955e34994dcf87a2d76cfdbab9170690a7f83e0aa4c6f36d8895d5b8e51" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.960220 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-scripts" (OuterVolumeSpecName: "scripts") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.964003 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"659dcc4f-0134-40f4-a6ee-150bb5dee79b","Type":"ContainerStarted","Data":"a31f9263a42f25b86c8bf41f021ece6a39f27dbff164c0ecf0a2197c338d60ca"} Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.973754 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:33 crc kubenswrapper[4787]: I0219 19:40:33.977168 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f48cb9d5-9e69-4553-b61e-e0bde367ffc7" (UID: "f48cb9d5-9e69-4553-b61e-e0bde367ffc7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.037590 4787 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.037640 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wgzz\" (UniqueName: \"kubernetes.io/projected/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-kube-api-access-6wgzz\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.037654 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.037662 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.037699 4787 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f48cb9d5-9e69-4553-b61e-e0bde367ffc7-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.054950 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qhkfm"] Feb 19 19:40:34 crc 
kubenswrapper[4787]: W0219 19:40:34.057501 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9057a20e_ff30_454d_8c86_d42f7543571e.slice/crio-f7924efc5d171a8888e886840f66882b7781e1e2cb83a7fe91c248b0d28ffb99 WatchSource:0}: Error finding container f7924efc5d171a8888e886840f66882b7781e1e2cb83a7fe91c248b0d28ffb99: Status 404 returned error can't find the container with id f7924efc5d171a8888e886840f66882b7781e1e2cb83a7fe91c248b0d28ffb99 Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.151667 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n4c8f-config-gbdqx"] Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.163630 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:40:34 crc kubenswrapper[4787]: W0219 19:40:34.172580 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1907cbb2_ac2d_4eb3_b1a8_94e4a893d8b5.slice/crio-cfd6b2cd1ec9a85ebbc685114b38d3cb436492ae03dd10d0ae1bc6437117f7a4 WatchSource:0}: Error finding container cfd6b2cd1ec9a85ebbc685114b38d3cb436492ae03dd10d0ae1bc6437117f7a4: Status 404 returned error can't find the container with id cfd6b2cd1ec9a85ebbc685114b38d3cb436492ae03dd10d0ae1bc6437117f7a4 Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.977233 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b84c5" event={"ID":"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570","Type":"ContainerStarted","Data":"a95101f941e35ce54be29a333b8d0a91280d2cbeb629aa7ee05745eda176161e"} Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.980108 4787 generic.go:334] "Generic (PLEG): container finished" podID="1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" containerID="c0993ea1a2352afb9355a6094f9897074e0eda9fc01f547ad3ac86144eb9370d" exitCode=0 Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 
19:40:34.980189 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n4c8f-config-gbdqx" event={"ID":"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5","Type":"ContainerDied","Data":"c0993ea1a2352afb9355a6094f9897074e0eda9fc01f547ad3ac86144eb9370d"} Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.980216 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n4c8f-config-gbdqx" event={"ID":"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5","Type":"ContainerStarted","Data":"cfd6b2cd1ec9a85ebbc685114b38d3cb436492ae03dd10d0ae1bc6437117f7a4"} Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.982208 4787 generic.go:334] "Generic (PLEG): container finished" podID="9057a20e-ff30-454d-8c86-d42f7543571e" containerID="5c63885e26900562745a0667ee283ccce29e2a329a5d04ef8fabbf69f749ed04" exitCode=0 Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.982321 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhkfm" event={"ID":"9057a20e-ff30-454d-8c86-d42f7543571e","Type":"ContainerDied","Data":"5c63885e26900562745a0667ee283ccce29e2a329a5d04ef8fabbf69f749ed04"} Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.982353 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhkfm" event={"ID":"9057a20e-ff30-454d-8c86-d42f7543571e","Type":"ContainerStarted","Data":"f7924efc5d171a8888e886840f66882b7781e1e2cb83a7fe91c248b0d28ffb99"} Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.983792 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fa69714c-41e5-4477-9267-303589d519de","Type":"ContainerStarted","Data":"54803c5b9b46dc569b1a5086febda0fcb02b92226d2d42c2ee2342222be01763"} Feb 19 19:40:34 crc kubenswrapper[4787]: I0219 19:40:34.995934 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-b84c5" podStartSLOduration=3.053591223 
podStartE2EDuration="17.995915866s" podCreationTimestamp="2026-02-19 19:40:17 +0000 UTC" firstStartedPulling="2026-02-19 19:40:18.725899898 +0000 UTC m=+1286.516565840" lastFinishedPulling="2026-02-19 19:40:33.668224551 +0000 UTC m=+1301.458890483" observedRunningTime="2026-02-19 19:40:34.988778512 +0000 UTC m=+1302.779444454" watchObservedRunningTime="2026-02-19 19:40:34.995915866 +0000 UTC m=+1302.786581808" Feb 19 19:40:35 crc kubenswrapper[4787]: I0219 19:40:35.994190 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fa69714c-41e5-4477-9267-303589d519de","Type":"ContainerStarted","Data":"7681af018ec623d62996ed62314b2dd958430251d52c68584ba58c84c988b18e"} Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.023653 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=9.692673284 podStartE2EDuration="11.023630552s" podCreationTimestamp="2026-02-19 19:40:25 +0000 UTC" firstStartedPulling="2026-02-19 19:40:34.160116983 +0000 UTC m=+1301.950782925" lastFinishedPulling="2026-02-19 19:40:35.491074251 +0000 UTC m=+1303.281740193" observedRunningTime="2026-02-19 19:40:36.01936245 +0000 UTC m=+1303.810028392" watchObservedRunningTime="2026-02-19 19:40:36.023630552 +0000 UTC m=+1303.814296494" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.473250 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.481745 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608649 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-log-ovn\") pod \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608704 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jwd6\" (UniqueName: \"kubernetes.io/projected/9057a20e-ff30-454d-8c86-d42f7543571e-kube-api-access-2jwd6\") pod \"9057a20e-ff30-454d-8c86-d42f7543571e\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" (UID: "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608872 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-scripts\") pod \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608924 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-additional-scripts\") pod \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608948 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run\") pod \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.608965 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run-ovn\") pod \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609222 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879q6\" (UniqueName: \"kubernetes.io/projected/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-kube-api-access-879q6\") pod \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\" (UID: \"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609234 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run" (OuterVolumeSpecName: "var-run") pod "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" (UID: "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609288 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9057a20e-ff30-454d-8c86-d42f7543571e-operator-scripts\") pod \"9057a20e-ff30-454d-8c86-d42f7543571e\" (UID: \"9057a20e-ff30-454d-8c86-d42f7543571e\") " Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609291 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" (UID: "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609786 4787 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609799 4787 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609807 4787 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609804 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9057a20e-ff30-454d-8c86-d42f7543571e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9057a20e-ff30-454d-8c86-d42f7543571e" (UID: "9057a20e-ff30-454d-8c86-d42f7543571e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.609915 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" (UID: "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.610897 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-scripts" (OuterVolumeSpecName: "scripts") pod "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" (UID: "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.711889 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.711918 4787 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.711928 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9057a20e-ff30-454d-8c86-d42f7543571e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.738950 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9057a20e-ff30-454d-8c86-d42f7543571e-kube-api-access-2jwd6" (OuterVolumeSpecName: "kube-api-access-2jwd6") pod "9057a20e-ff30-454d-8c86-d42f7543571e" (UID: "9057a20e-ff30-454d-8c86-d42f7543571e"). InnerVolumeSpecName "kube-api-access-2jwd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.752377 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-kube-api-access-879q6" (OuterVolumeSpecName: "kube-api-access-879q6") pod "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" (UID: "1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5"). InnerVolumeSpecName "kube-api-access-879q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.814749 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-879q6\" (UniqueName: \"kubernetes.io/projected/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5-kube-api-access-879q6\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:36 crc kubenswrapper[4787]: I0219 19:40:36.814780 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jwd6\" (UniqueName: \"kubernetes.io/projected/9057a20e-ff30-454d-8c86-d42f7543571e-kube-api-access-2jwd6\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.065360 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n4c8f-config-gbdqx" event={"ID":"1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5","Type":"ContainerDied","Data":"cfd6b2cd1ec9a85ebbc685114b38d3cb436492ae03dd10d0ae1bc6437117f7a4"} Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.065767 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd6b2cd1ec9a85ebbc685114b38d3cb436492ae03dd10d0ae1bc6437117f7a4" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.065382 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n4c8f-config-gbdqx" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.069995 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qhkfm" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.070160 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qhkfm" event={"ID":"9057a20e-ff30-454d-8c86-d42f7543571e","Type":"ContainerDied","Data":"f7924efc5d171a8888e886840f66882b7781e1e2cb83a7fe91c248b0d28ffb99"} Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.070264 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7924efc5d171a8888e886840f66882b7781e1e2cb83a7fe91c248b0d28ffb99" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.079856 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"659dcc4f-0134-40f4-a6ee-150bb5dee79b","Type":"ContainerStarted","Data":"150197c73f51521be8109eaa6be3373ab4bf0a5f420110057a1637b382ddb914"} Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.442778 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n4c8f" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.603404 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n4c8f-config-gbdqx"] Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.611479 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n4c8f-config-gbdqx"] Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.837000 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:37 crc kubenswrapper[4787]: I0219 19:40:37.844812 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/3109e7cb-bd74-40d5-a2ab-deb7a9794d44-etc-swift\") pod \"swift-storage-0\" (UID: \"3109e7cb-bd74-40d5-a2ab-deb7a9794d44\") " pod="openstack/swift-storage-0" Feb 19 19:40:38 crc kubenswrapper[4787]: I0219 19:40:38.093500 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"659dcc4f-0134-40f4-a6ee-150bb5dee79b","Type":"ContainerStarted","Data":"b0b99e5ed4dd61e6099915ebcb2949a96efd578d53180312eae6e33a85f615c2"} Feb 19 19:40:38 crc kubenswrapper[4787]: I0219 19:40:38.116868 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 19:40:38 crc kubenswrapper[4787]: I0219 19:40:38.123070 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.123057575 podStartE2EDuration="17.123057575s" podCreationTimestamp="2026-02-19 19:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:38.1211062 +0000 UTC m=+1305.911772142" watchObservedRunningTime="2026-02-19 19:40:38.123057575 +0000 UTC m=+1305.913723517" Feb 19 19:40:38 crc kubenswrapper[4787]: W0219 19:40:38.699752 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3109e7cb_bd74_40d5_a2ab_deb7a9794d44.slice/crio-368f08755acb904be49a7c21bc9eeb52eccbd4bbf1a70a0e3176ed2c1ed66d36 WatchSource:0}: Error finding container 368f08755acb904be49a7c21bc9eeb52eccbd4bbf1a70a0e3176ed2c1ed66d36: Status 404 returned error can't find the container with id 368f08755acb904be49a7c21bc9eeb52eccbd4bbf1a70a0e3176ed2c1ed66d36 Feb 19 19:40:38 crc kubenswrapper[4787]: I0219 19:40:38.703289 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 19:40:38 crc kubenswrapper[4787]: I0219 19:40:38.903115 4787 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" path="/var/lib/kubelet/pods/1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5/volumes" Feb 19 19:40:38 crc kubenswrapper[4787]: I0219 19:40:38.960809 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.083065 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.103392 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"368f08755acb904be49a7c21bc9eeb52eccbd4bbf1a70a0e3176ed2c1ed66d36"} Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.263196 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.263250 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.263293 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.264059 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4c290f7666b81201ba0242964eb17cef06ef0c6b6b9b4a97e80ee9c3f5daac23"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.264123 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://4c290f7666b81201ba0242964eb17cef06ef0c6b6b9b4a97e80ee9c3f5daac23" gracePeriod=600 Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.304260 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 19 19:40:39 crc kubenswrapper[4787]: I0219 19:40:39.447929 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:40:40 crc kubenswrapper[4787]: I0219 19:40:40.117463 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="4c290f7666b81201ba0242964eb17cef06ef0c6b6b9b4a97e80ee9c3f5daac23" exitCode=0 Feb 19 19:40:40 crc kubenswrapper[4787]: I0219 19:40:40.117542 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"4c290f7666b81201ba0242964eb17cef06ef0c6b6b9b4a97e80ee9c3f5daac23"} Feb 19 19:40:40 crc kubenswrapper[4787]: I0219 19:40:40.117962 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc"} Feb 19 19:40:40 crc kubenswrapper[4787]: I0219 19:40:40.117984 
4787 scope.go:117] "RemoveContainer" containerID="4705fa8f568a3ef2f81b22b29b62816c4b2d13e8cd966ddadc7147e1265dbf66" Feb 19 19:40:41 crc kubenswrapper[4787]: I0219 19:40:41.769047 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"36d255159c41c0e79f0a09a01519f22b57ce64532eba126a9552da1bf3bbb62d"} Feb 19 19:40:41 crc kubenswrapper[4787]: I0219 19:40:41.769527 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"971fa30aa47ed4a7f74bc9d9bd25eb9dddc795ca2114a33c75c1c52862c17a7e"} Feb 19 19:40:42 crc kubenswrapper[4787]: I0219 19:40:42.260404 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:42 crc kubenswrapper[4787]: I0219 19:40:42.781099 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"4f1ccc1e847af284af234d81daa7f9f5e2b763c94b3411b378108b81082e9c5d"} Feb 19 19:40:42 crc kubenswrapper[4787]: I0219 19:40:42.781380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"7f729ab01eb0448b610f2db8ba86d232c1fbcc2f0d4bbeb30cb083ec3f02403b"} Feb 19 19:40:42 crc kubenswrapper[4787]: I0219 19:40:42.783278 4787 generic.go:334] "Generic (PLEG): container finished" podID="b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" containerID="a95101f941e35ce54be29a333b8d0a91280d2cbeb629aa7ee05745eda176161e" exitCode=0 Feb 19 19:40:42 crc kubenswrapper[4787]: I0219 19:40:42.783308 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b84c5" 
event={"ID":"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570","Type":"ContainerDied","Data":"a95101f941e35ce54be29a333b8d0a91280d2cbeb629aa7ee05745eda176161e"} Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.677429 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-v8g95"] Feb 19 19:40:43 crc kubenswrapper[4787]: E0219 19:40:43.678164 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9057a20e-ff30-454d-8c86-d42f7543571e" containerName="mariadb-account-create-update" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.678186 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9057a20e-ff30-454d-8c86-d42f7543571e" containerName="mariadb-account-create-update" Feb 19 19:40:43 crc kubenswrapper[4787]: E0219 19:40:43.678210 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" containerName="ovn-config" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.678218 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" containerName="ovn-config" Feb 19 19:40:43 crc kubenswrapper[4787]: E0219 19:40:43.678231 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48cb9d5-9e69-4553-b61e-e0bde367ffc7" containerName="swift-ring-rebalance" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.678240 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48cb9d5-9e69-4553-b61e-e0bde367ffc7" containerName="swift-ring-rebalance" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.678470 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48cb9d5-9e69-4553-b61e-e0bde367ffc7" containerName="swift-ring-rebalance" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.678496 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1907cbb2-ac2d-4eb3-b1a8-94e4a893d8b5" containerName="ovn-config" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.678514 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9057a20e-ff30-454d-8c86-d42f7543571e" containerName="mariadb-account-create-update" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.679428 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.704659 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v8g95"] Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.768133 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-57ea-account-create-update-tvq42"] Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.771119 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.773725 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.787669 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-57ea-account-create-update-tvq42"] Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.804093 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"ec5f81ed8bae134bdceff704db40f3c8ad75f36f5fca28ee3c7b682f3b8d10f6"} Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.865377 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-8z4ns"] Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.866968 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.902474 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7747ed1a-72b3-4273-baf1-f34f1ab95760-operator-scripts\") pod \"heat-57ea-account-create-update-tvq42\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.902873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6c7de8-26a7-41ce-a452-6d392be91fe6-operator-scripts\") pod \"cinder-db-create-v8g95\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.902930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b660e059-e520-4aa9-898b-28346b096b31-operator-scripts\") pod \"heat-db-create-8z4ns\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.902995 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76v66\" (UniqueName: \"kubernetes.io/projected/b660e059-e520-4aa9-898b-28346b096b31-kube-api-access-76v66\") pod \"heat-db-create-8z4ns\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.903074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb9jd\" (UniqueName: \"kubernetes.io/projected/6f6c7de8-26a7-41ce-a452-6d392be91fe6-kube-api-access-fb9jd\") pod 
\"cinder-db-create-v8g95\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.903190 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msrjh\" (UniqueName: \"kubernetes.io/projected/7747ed1a-72b3-4273-baf1-f34f1ab95760-kube-api-access-msrjh\") pod \"heat-57ea-account-create-update-tvq42\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.927532 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ca57-account-create-update-6vtmg"] Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.929147 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.934261 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.946745 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-8z4ns"] Feb 19 19:40:43 crc kubenswrapper[4787]: I0219 19:40:43.965019 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ca57-account-create-update-6vtmg"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.007057 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6c7de8-26a7-41ce-a452-6d392be91fe6-operator-scripts\") pod \"cinder-db-create-v8g95\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.008224 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b660e059-e520-4aa9-898b-28346b096b31-operator-scripts\") pod \"heat-db-create-8z4ns\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.008386 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv84l\" (UniqueName: \"kubernetes.io/projected/afce4d4f-f308-4581-be34-e782d95c89f3-kube-api-access-lv84l\") pod \"cinder-ca57-account-create-update-6vtmg\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.008486 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76v66\" (UniqueName: \"kubernetes.io/projected/b660e059-e520-4aa9-898b-28346b096b31-kube-api-access-76v66\") pod \"heat-db-create-8z4ns\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.008664 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb9jd\" (UniqueName: \"kubernetes.io/projected/6f6c7de8-26a7-41ce-a452-6d392be91fe6-kube-api-access-fb9jd\") pod \"cinder-db-create-v8g95\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.009262 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6c7de8-26a7-41ce-a452-6d392be91fe6-operator-scripts\") pod \"cinder-db-create-v8g95\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.012702 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msrjh\" (UniqueName: 
\"kubernetes.io/projected/7747ed1a-72b3-4273-baf1-f34f1ab95760-kube-api-access-msrjh\") pod \"heat-57ea-account-create-update-tvq42\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.012930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afce4d4f-f308-4581-be34-e782d95c89f3-operator-scripts\") pod \"cinder-ca57-account-create-update-6vtmg\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.015388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7747ed1a-72b3-4273-baf1-f34f1ab95760-operator-scripts\") pod \"heat-57ea-account-create-update-tvq42\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.021747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b660e059-e520-4aa9-898b-28346b096b31-operator-scripts\") pod \"heat-db-create-8z4ns\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.022313 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7747ed1a-72b3-4273-baf1-f34f1ab95760-operator-scripts\") pod \"heat-57ea-account-create-update-tvq42\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.069800 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-045b-account-create-update-nntlf"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.071749 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.073122 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb9jd\" (UniqueName: \"kubernetes.io/projected/6f6c7de8-26a7-41ce-a452-6d392be91fe6-kube-api-access-fb9jd\") pod \"cinder-db-create-v8g95\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.073739 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76v66\" (UniqueName: \"kubernetes.io/projected/b660e059-e520-4aa9-898b-28346b096b31-kube-api-access-76v66\") pod \"heat-db-create-8z4ns\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.074075 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.092129 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-sq5bc"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.093722 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.097351 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msrjh\" (UniqueName: \"kubernetes.io/projected/7747ed1a-72b3-4273-baf1-f34f1ab95760-kube-api-access-msrjh\") pod \"heat-57ea-account-create-update-tvq42\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.099504 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.103834 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-045b-account-create-update-nntlf"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.131502 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afce4d4f-f308-4581-be34-e782d95c89f3-operator-scripts\") pod \"cinder-ca57-account-create-update-6vtmg\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.131862 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c292df44-db14-4c45-8bd5-a0bd2da5e92f-operator-scripts\") pod \"neutron-db-create-sq5bc\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.132043 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv84l\" (UniqueName: \"kubernetes.io/projected/afce4d4f-f308-4581-be34-e782d95c89f3-kube-api-access-lv84l\") pod \"cinder-ca57-account-create-update-6vtmg\" (UID: 
\"afce4d4f-f308-4581-be34-e782d95c89f3\") " pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.132361 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33472d78-2ff8-4741-bfa6-c85d46fa60ae-operator-scripts\") pod \"neutron-045b-account-create-update-nntlf\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.132513 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8pg\" (UniqueName: \"kubernetes.io/projected/33472d78-2ff8-4741-bfa6-c85d46fa60ae-kube-api-access-mq8pg\") pod \"neutron-045b-account-create-update-nntlf\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.132684 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll42h\" (UniqueName: \"kubernetes.io/projected/c292df44-db14-4c45-8bd5-a0bd2da5e92f-kube-api-access-ll42h\") pod \"neutron-db-create-sq5bc\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.132911 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afce4d4f-f308-4581-be34-e782d95c89f3-operator-scripts\") pod \"cinder-ca57-account-create-update-6vtmg\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.132189 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-sq5bc"] Feb 19 19:40:44 crc 
kubenswrapper[4787]: I0219 19:40:44.169822 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv84l\" (UniqueName: \"kubernetes.io/projected/afce4d4f-f308-4581-be34-e782d95c89f3-kube-api-access-lv84l\") pod \"cinder-ca57-account-create-update-6vtmg\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.185326 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mpb5r"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.191958 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.201422 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-870d-account-create-update-pdvj4"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.203258 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.208750 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.214564 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.221362 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mpb5r"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.225448 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-870d-account-create-update-pdvj4"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.250809 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-operator-scripts\") pod \"barbican-870d-account-create-update-pdvj4\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg5kk\" (UniqueName: \"kubernetes.io/projected/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-kube-api-access-qg5kk\") pod \"barbican-870d-account-create-update-pdvj4\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252388 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c292df44-db14-4c45-8bd5-a0bd2da5e92f-operator-scripts\") pod \"neutron-db-create-sq5bc\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252441 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33472d78-2ff8-4741-bfa6-c85d46fa60ae-operator-scripts\") pod \"neutron-045b-account-create-update-nntlf\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq8pg\" (UniqueName: 
\"kubernetes.io/projected/33472d78-2ff8-4741-bfa6-c85d46fa60ae-kube-api-access-mq8pg\") pod \"neutron-045b-account-create-update-nntlf\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252500 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxvm\" (UniqueName: \"kubernetes.io/projected/58c92c68-c226-47e8-b01a-3946123ed402-kube-api-access-hnxvm\") pod \"barbican-db-create-mpb5r\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252529 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c92c68-c226-47e8-b01a-3946123ed402-operator-scripts\") pod \"barbican-db-create-mpb5r\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.252558 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll42h\" (UniqueName: \"kubernetes.io/projected/c292df44-db14-4c45-8bd5-a0bd2da5e92f-kube-api-access-ll42h\") pod \"neutron-db-create-sq5bc\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.254683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33472d78-2ff8-4741-bfa6-c85d46fa60ae-operator-scripts\") pod \"neutron-045b-account-create-update-nntlf\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.261744 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c292df44-db14-4c45-8bd5-a0bd2da5e92f-operator-scripts\") pod \"neutron-db-create-sq5bc\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.276316 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4nk2w"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.280719 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.285261 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.285688 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.285889 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b8sqz" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.286002 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.287823 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll42h\" (UniqueName: \"kubernetes.io/projected/c292df44-db14-4c45-8bd5-a0bd2da5e92f-kube-api-access-ll42h\") pod \"neutron-db-create-sq5bc\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.288697 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4nk2w"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.290791 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq8pg\" (UniqueName: 
\"kubernetes.io/projected/33472d78-2ff8-4741-bfa6-c85d46fa60ae-kube-api-access-mq8pg\") pod \"neutron-045b-account-create-update-nntlf\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.297006 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.345474 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.347729 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.354081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-operator-scripts\") pod \"barbican-870d-account-create-update-pdvj4\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.354158 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg5kk\" (UniqueName: \"kubernetes.io/projected/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-kube-api-access-qg5kk\") pod \"barbican-870d-account-create-update-pdvj4\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.354259 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnxvm\" (UniqueName: \"kubernetes.io/projected/58c92c68-c226-47e8-b01a-3946123ed402-kube-api-access-hnxvm\") pod \"barbican-db-create-mpb5r\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " 
pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.354296 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c92c68-c226-47e8-b01a-3946123ed402-operator-scripts\") pod \"barbican-db-create-mpb5r\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.355804 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-operator-scripts\") pod \"barbican-870d-account-create-update-pdvj4\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.356554 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c92c68-c226-47e8-b01a-3946123ed402-operator-scripts\") pod \"barbican-db-create-mpb5r\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.392320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg5kk\" (UniqueName: \"kubernetes.io/projected/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-kube-api-access-qg5kk\") pod \"barbican-870d-account-create-update-pdvj4\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.398532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnxvm\" (UniqueName: \"kubernetes.io/projected/58c92c68-c226-47e8-b01a-3946123ed402-kube-api-access-hnxvm\") pod \"barbican-db-create-mpb5r\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " 
pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.456668 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zmvn\" (UniqueName: \"kubernetes.io/projected/3fced1c8-edcc-41a6-a703-3bde87073a5f-kube-api-access-5zmvn\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.456813 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-combined-ca-bundle\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.456930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-config-data\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.559084 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-combined-ca-bundle\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.559700 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-config-data\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 
19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.559795 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zmvn\" (UniqueName: \"kubernetes.io/projected/3fced1c8-edcc-41a6-a703-3bde87073a5f-kube-api-access-5zmvn\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.629057 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.656601 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.668774 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-combined-ca-bundle\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.668889 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-config-data\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.669712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zmvn\" (UniqueName: \"kubernetes.io/projected/3fced1c8-edcc-41a6-a703-3bde87073a5f-kube-api-access-5zmvn\") pod \"keystone-db-sync-4nk2w\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.693012 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.693435 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.863930 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-combined-ca-bundle\") pod \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.864373 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzqw\" (UniqueName: \"kubernetes.io/projected/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-kube-api-access-6fzqw\") pod \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.864501 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-config-data\") pod \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.864562 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-db-sync-config-data\") pod \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\" (UID: \"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570\") " Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.867331 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b84c5" event={"ID":"b1b3acff-ad59-4a99-9d0c-bf1a5f94b570","Type":"ContainerDied","Data":"0cc7224a69e35b5b24f879b2843d0dcfa03ea4000652ead0d702b7ed7a8f64d6"} Feb 19 19:40:44 
crc kubenswrapper[4787]: I0219 19:40:44.867381 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cc7224a69e35b5b24f879b2843d0dcfa03ea4000652ead0d702b7ed7a8f64d6" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.867480 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b84c5" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.878902 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-kube-api-access-6fzqw" (OuterVolumeSpecName: "kube-api-access-6fzqw") pod "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" (UID: "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570"). InnerVolumeSpecName "kube-api-access-6fzqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.885555 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"f4c0009d69ba75c6bb5dd38be017546e5f4122b8cce42496f0c0edba3120f935"} Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.885597 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"fba1f4003174dc072090d55544168116af77de624c9a2e526eb85cc4bcf37b15"} Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.886100 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" (UID: "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.923898 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" (UID: "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.935054 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-57ea-account-create-update-tvq42"] Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.972832 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.972870 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:44 crc kubenswrapper[4787]: I0219 19:40:44.972880 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzqw\" (UniqueName: \"kubernetes.io/projected/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-kube-api-access-6fzqw\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.037566 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-config-data" (OuterVolumeSpecName: "config-data") pod "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" (UID: "b1b3acff-ad59-4a99-9d0c-bf1a5f94b570"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.078748 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.105998 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ca57-account-create-update-6vtmg"] Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.259393 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-045b-account-create-update-nntlf"] Feb 19 19:40:45 crc kubenswrapper[4787]: W0219 19:40:45.291781 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb660e059_e520_4aa9_898b_28346b096b31.slice/crio-e7f0e05bd094a4a18f296ee8a26db4f8b6c099ed4bfa066faa793f4ef3f5a1cc WatchSource:0}: Error finding container e7f0e05bd094a4a18f296ee8a26db4f8b6c099ed4bfa066faa793f4ef3f5a1cc: Status 404 returned error can't find the container with id e7f0e05bd094a4a18f296ee8a26db4f8b6c099ed4bfa066faa793f4ef3f5a1cc Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.292675 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-8z4ns"] Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.317367 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qxzhm"] Feb 19 19:40:45 crc kubenswrapper[4787]: E0219 19:40:45.317849 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" containerName="glance-db-sync" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.317861 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" containerName="glance-db-sync" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.318054 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" containerName="glance-db-sync" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.319297 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.348126 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-sq5bc"] Feb 19 19:40:45 crc kubenswrapper[4787]: W0219 19:40:45.352501 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc292df44_db14_4c45_8bd5_a0bd2da5e92f.slice/crio-35c17e73be6a30ffd56d7e88137d36d8378afb238f55f5c099fd8e3c64e575f2 WatchSource:0}: Error finding container 35c17e73be6a30ffd56d7e88137d36d8378afb238f55f5c099fd8e3c64e575f2: Status 404 returned error can't find the container with id 35c17e73be6a30ffd56d7e88137d36d8378afb238f55f5c099fd8e3c64e575f2 Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.358960 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qxzhm"] Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.492178 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.492732 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 
crc kubenswrapper[4787]: I0219 19:40:45.492808 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh2l\" (UniqueName: \"kubernetes.io/projected/febd637e-6511-4aba-add1-8c52808c1f2e-kube-api-access-7zh2l\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.492891 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.493033 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-config\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.538387 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-v8g95"] Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.595902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.595971 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.596005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zh2l\" (UniqueName: \"kubernetes.io/projected/febd637e-6511-4aba-add1-8c52808c1f2e-kube-api-access-7zh2l\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.596072 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.596208 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-config\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.597692 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.597987 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-config\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " 
pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.598241 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.598804 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.643670 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zh2l\" (UniqueName: \"kubernetes.io/projected/febd637e-6511-4aba-add1-8c52808c1f2e-kube-api-access-7zh2l\") pod \"dnsmasq-dns-5b946c75cc-qxzhm\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.700138 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.920049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"c988e1171ea7d181b2c997c962dffcea6ab4f48b9417377522837d636efaf7b2"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.921899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-sq5bc" event={"ID":"c292df44-db14-4c45-8bd5-a0bd2da5e92f","Type":"ContainerStarted","Data":"35c17e73be6a30ffd56d7e88137d36d8378afb238f55f5c099fd8e3c64e575f2"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.926983 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8z4ns" event={"ID":"b660e059-e520-4aa9-898b-28346b096b31","Type":"ContainerStarted","Data":"1192d4fd488d23b4bfab3ae1cfcfdce4076e77fce5686d2eb40581fd54134742"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.927310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8z4ns" event={"ID":"b660e059-e520-4aa9-898b-28346b096b31","Type":"ContainerStarted","Data":"e7f0e05bd094a4a18f296ee8a26db4f8b6c099ed4bfa066faa793f4ef3f5a1cc"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.929265 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-57ea-account-create-update-tvq42" event={"ID":"7747ed1a-72b3-4273-baf1-f34f1ab95760","Type":"ContainerStarted","Data":"7b1e3469562215544ae0097304081ccd61ff6c619113732310803bdc2375541c"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.929330 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-57ea-account-create-update-tvq42" event={"ID":"7747ed1a-72b3-4273-baf1-f34f1ab95760","Type":"ContainerStarted","Data":"553a297cb38543f91343aef43a9280b29884ed6703266c6455a821f8cfb7ff28"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 
19:40:45.931155 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v8g95" event={"ID":"6f6c7de8-26a7-41ce-a452-6d392be91fe6","Type":"ContainerStarted","Data":"0101b9c0e68743f3cf7648945103721d2ebf4966b31e00ba9015f329f3c726b1"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.932669 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-045b-account-create-update-nntlf" event={"ID":"33472d78-2ff8-4741-bfa6-c85d46fa60ae","Type":"ContainerStarted","Data":"48c2d6b9f5b18fcbc64df3b76adcf82a4aa359f70e2b2f4cb8b49ce7710e9143"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.932699 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-045b-account-create-update-nntlf" event={"ID":"33472d78-2ff8-4741-bfa6-c85d46fa60ae","Type":"ContainerStarted","Data":"29dcdc8841ff2a572898a2ce15128ad22f23a07f3aa23d38c825cc6fe228ba1f"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.953226 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca57-account-create-update-6vtmg" event={"ID":"afce4d4f-f308-4581-be34-e782d95c89f3","Type":"ContainerStarted","Data":"59934b945fa6636b78af9a63d1847ebf90d7b3c4c8ca0d593840a0e445b82055"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.953279 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca57-account-create-update-6vtmg" event={"ID":"afce4d4f-f308-4581-be34-e782d95c89f3","Type":"ContainerStarted","Data":"569b1e3631678ffd59624e01ff7185303e1d647e56ae8090d7251e6ed41f109d"} Feb 19 19:40:45 crc kubenswrapper[4787]: I0219 19:40:45.978129 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-8z4ns" podStartSLOduration=2.9781027030000002 podStartE2EDuration="2.978102703s" podCreationTimestamp="2026-02-19 19:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 19:40:45.952442533 +0000 UTC m=+1313.743108475" watchObservedRunningTime="2026-02-19 19:40:45.978102703 +0000 UTC m=+1313.768768645" Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.004040 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-57ea-account-create-update-tvq42" podStartSLOduration=3.004007911 podStartE2EDuration="3.004007911s" podCreationTimestamp="2026-02-19 19:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:45.98115754 +0000 UTC m=+1313.771823482" watchObservedRunningTime="2026-02-19 19:40:46.004007911 +0000 UTC m=+1313.794673843" Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.035786 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-045b-account-create-update-nntlf" podStartSLOduration=2.035747464 podStartE2EDuration="2.035747464s" podCreationTimestamp="2026-02-19 19:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:46.010071793 +0000 UTC m=+1313.800737735" watchObservedRunningTime="2026-02-19 19:40:46.035747464 +0000 UTC m=+1313.826413406" Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.153070 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ca57-account-create-update-6vtmg" podStartSLOduration=3.153047163 podStartE2EDuration="3.153047163s" podCreationTimestamp="2026-02-19 19:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:46.03839837 +0000 UTC m=+1313.829064312" watchObservedRunningTime="2026-02-19 19:40:46.153047163 +0000 UTC m=+1313.943713105" Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.170523 4787 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-870d-account-create-update-pdvj4"] Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.175227 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4nk2w"] Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.193755 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mpb5r"] Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.582791 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qxzhm"] Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.964846 4787 generic.go:334] "Generic (PLEG): container finished" podID="33472d78-2ff8-4741-bfa6-c85d46fa60ae" containerID="48c2d6b9f5b18fcbc64df3b76adcf82a4aa359f70e2b2f4cb8b49ce7710e9143" exitCode=0 Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.964981 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-045b-account-create-update-nntlf" event={"ID":"33472d78-2ff8-4741-bfa6-c85d46fa60ae","Type":"ContainerDied","Data":"48c2d6b9f5b18fcbc64df3b76adcf82a4aa359f70e2b2f4cb8b49ce7710e9143"} Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.968319 4787 generic.go:334] "Generic (PLEG): container finished" podID="58c92c68-c226-47e8-b01a-3946123ed402" containerID="d7d318e32b475778800a7c552987b9dc4e743167f2f27dee8573440d8c16907b" exitCode=0 Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.968391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mpb5r" event={"ID":"58c92c68-c226-47e8-b01a-3946123ed402","Type":"ContainerDied","Data":"d7d318e32b475778800a7c552987b9dc4e743167f2f27dee8573440d8c16907b"} Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.968432 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mpb5r" 
event={"ID":"58c92c68-c226-47e8-b01a-3946123ed402","Type":"ContainerStarted","Data":"846ccef482831f9766de80c7152878a1396ed92f394848ad6d36fd589f7f677a"} Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.971779 4787 generic.go:334] "Generic (PLEG): container finished" podID="c292df44-db14-4c45-8bd5-a0bd2da5e92f" containerID="54b6aa1c70b995b9d69eeb780a78f5194893cbdcc04fb0d0b4d5a6a1c88345e4" exitCode=0 Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.971860 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-sq5bc" event={"ID":"c292df44-db14-4c45-8bd5-a0bd2da5e92f","Type":"ContainerDied","Data":"54b6aa1c70b995b9d69eeb780a78f5194893cbdcc04fb0d0b4d5a6a1c88345e4"} Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.978557 4787 generic.go:334] "Generic (PLEG): container finished" podID="b660e059-e520-4aa9-898b-28346b096b31" containerID="1192d4fd488d23b4bfab3ae1cfcfdce4076e77fce5686d2eb40581fd54134742" exitCode=0 Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.978711 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8z4ns" event={"ID":"b660e059-e520-4aa9-898b-28346b096b31","Type":"ContainerDied","Data":"1192d4fd488d23b4bfab3ae1cfcfdce4076e77fce5686d2eb40581fd54134742"} Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.982141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" event={"ID":"febd637e-6511-4aba-add1-8c52808c1f2e","Type":"ContainerStarted","Data":"e43de0854c98aa96ecb3e0a2c0c56200fe095092b0c87536aeea773e8b4ca7fa"} Feb 19 19:40:46 crc kubenswrapper[4787]: I0219 19:40:46.994691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4nk2w" event={"ID":"3fced1c8-edcc-41a6-a703-3bde87073a5f","Type":"ContainerStarted","Data":"853f6eab993bafef9564fa7cf1f74d433ae42089ecf70a421737f8700c080717"} Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.001225 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-870d-account-create-update-pdvj4" event={"ID":"b49278e3-c0b6-4bb6-ab6f-49386ea52e68","Type":"ContainerStarted","Data":"b717cb85ef34c4a64d1fcb71ca9df3beb560457b3bc948f32e40f62ef259adba"} Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.001271 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-870d-account-create-update-pdvj4" event={"ID":"b49278e3-c0b6-4bb6-ab6f-49386ea52e68","Type":"ContainerStarted","Data":"f33acc3a8ca12b3c720319e07b1d7221b962e3ca30a8e5294328ce20c93f85d4"} Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.003668 4787 generic.go:334] "Generic (PLEG): container finished" podID="afce4d4f-f308-4581-be34-e782d95c89f3" containerID="59934b945fa6636b78af9a63d1847ebf90d7b3c4c8ca0d593840a0e445b82055" exitCode=0 Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.003719 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca57-account-create-update-6vtmg" event={"ID":"afce4d4f-f308-4581-be34-e782d95c89f3","Type":"ContainerDied","Data":"59934b945fa6636b78af9a63d1847ebf90d7b3c4c8ca0d593840a0e445b82055"} Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.004767 4787 generic.go:334] "Generic (PLEG): container finished" podID="7747ed1a-72b3-4273-baf1-f34f1ab95760" containerID="7b1e3469562215544ae0097304081ccd61ff6c619113732310803bdc2375541c" exitCode=0 Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.004809 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-57ea-account-create-update-tvq42" event={"ID":"7747ed1a-72b3-4273-baf1-f34f1ab95760","Type":"ContainerDied","Data":"7b1e3469562215544ae0097304081ccd61ff6c619113732310803bdc2375541c"} Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.006149 4787 generic.go:334] "Generic (PLEG): container finished" podID="6f6c7de8-26a7-41ce-a452-6d392be91fe6" containerID="d3423e1226e06f4573726ab8ca612b6c0235f68c2f0a34f1ae0403bb72f78bfb" exitCode=0 Feb 19 19:40:47 crc 
kubenswrapper[4787]: I0219 19:40:47.006179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v8g95" event={"ID":"6f6c7de8-26a7-41ce-a452-6d392be91fe6","Type":"ContainerDied","Data":"d3423e1226e06f4573726ab8ca612b6c0235f68c2f0a34f1ae0403bb72f78bfb"} Feb 19 19:40:47 crc kubenswrapper[4787]: I0219 19:40:47.094199 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-870d-account-create-update-pdvj4" podStartSLOduration=3.094175025 podStartE2EDuration="3.094175025s" podCreationTimestamp="2026-02-19 19:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:47.083409578 +0000 UTC m=+1314.874075530" watchObservedRunningTime="2026-02-19 19:40:47.094175025 +0000 UTC m=+1314.884840967" Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.032827 4787 generic.go:334] "Generic (PLEG): container finished" podID="b49278e3-c0b6-4bb6-ab6f-49386ea52e68" containerID="b717cb85ef34c4a64d1fcb71ca9df3beb560457b3bc948f32e40f62ef259adba" exitCode=0 Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.034019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-870d-account-create-update-pdvj4" event={"ID":"b49278e3-c0b6-4bb6-ab6f-49386ea52e68","Type":"ContainerDied","Data":"b717cb85ef34c4a64d1fcb71ca9df3beb560457b3bc948f32e40f62ef259adba"} Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.055433 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"e5520c41d0386842281aca9fe6e3ec1281918edaaf58a4dba45fe8fff5dfbfae"} Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.055478 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"ae427b448ea040911edb9e5a18d7552f0def46e73cc8b34b2ea73f4d919ebae6"} Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.055487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"5481dc25d598f9b88dd7206e2cb92eddbcda11e2e1ee9ca1f9349f9a6b824e31"} Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.061288 4787 generic.go:334] "Generic (PLEG): container finished" podID="febd637e-6511-4aba-add1-8c52808c1f2e" containerID="063691f2d14f85e66441e9223f681d293cbdf64a123d7eb5e30163009215c34f" exitCode=0 Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.063859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" event={"ID":"febd637e-6511-4aba-add1-8c52808c1f2e","Type":"ContainerDied","Data":"063691f2d14f85e66441e9223f681d293cbdf64a123d7eb5e30163009215c34f"} Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.747920 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.877543 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b660e059-e520-4aa9-898b-28346b096b31-operator-scripts\") pod \"b660e059-e520-4aa9-898b-28346b096b31\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.877998 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76v66\" (UniqueName: \"kubernetes.io/projected/b660e059-e520-4aa9-898b-28346b096b31-kube-api-access-76v66\") pod \"b660e059-e520-4aa9-898b-28346b096b31\" (UID: \"b660e059-e520-4aa9-898b-28346b096b31\") " Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.878378 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b660e059-e520-4aa9-898b-28346b096b31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b660e059-e520-4aa9-898b-28346b096b31" (UID: "b660e059-e520-4aa9-898b-28346b096b31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.878866 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b660e059-e520-4aa9-898b-28346b096b31-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.902078 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b660e059-e520-4aa9-898b-28346b096b31-kube-api-access-76v66" (OuterVolumeSpecName: "kube-api-access-76v66") pod "b660e059-e520-4aa9-898b-28346b096b31" (UID: "b660e059-e520-4aa9-898b-28346b096b31"). InnerVolumeSpecName "kube-api-access-76v66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:48 crc kubenswrapper[4787]: I0219 19:40:48.981829 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76v66\" (UniqueName: \"kubernetes.io/projected/b660e059-e520-4aa9-898b-28346b096b31-kube-api-access-76v66\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.201223 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"a9d8805233de41313656a3cb19d2941f786fcb8f1e0229e5db70426b33554b12"} Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.201548 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"c2daefd23122bf311e201063df9bc4d6441da57601f8d6050272a431f45a29cb"} Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.201559 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"e3cd01fbfa38c6a23b56f50ea4fa871495b119accf714cec0283ba41925991c1"} Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.222825 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-8z4ns" event={"ID":"b660e059-e520-4aa9-898b-28346b096b31","Type":"ContainerDied","Data":"e7f0e05bd094a4a18f296ee8a26db4f8b6c099ed4bfa066faa793f4ef3f5a1cc"} Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.222861 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7f0e05bd094a4a18f296ee8a26db4f8b6c099ed4bfa066faa793f4ef3f5a1cc" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.222938 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-8z4ns" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.238015 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" event={"ID":"febd637e-6511-4aba-add1-8c52808c1f2e","Type":"ContainerStarted","Data":"f753ade8cce6a371eced07045d011d81cacdf6d3741dcc405940ae6076ee63d9"} Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.238091 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.316081 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" podStartSLOduration=4.316060984 podStartE2EDuration="4.316060984s" podCreationTimestamp="2026-02-19 19:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:49.286235235 +0000 UTC m=+1317.076901177" watchObservedRunningTime="2026-02-19 19:40:49.316060984 +0000 UTC m=+1317.106726926" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.450944 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.462855 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.474878 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.503375 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.509499 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.534955 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607276 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c292df44-db14-4c45-8bd5-a0bd2da5e92f-operator-scripts\") pod \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607581 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll42h\" (UniqueName: \"kubernetes.io/projected/c292df44-db14-4c45-8bd5-a0bd2da5e92f-kube-api-access-ll42h\") pod \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\" (UID: \"c292df44-db14-4c45-8bd5-a0bd2da5e92f\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607637 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afce4d4f-f308-4581-be34-e782d95c89f3-operator-scripts\") pod \"afce4d4f-f308-4581-be34-e782d95c89f3\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607732 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6c7de8-26a7-41ce-a452-6d392be91fe6-operator-scripts\") pod \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607800 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-msrjh\" (UniqueName: \"kubernetes.io/projected/7747ed1a-72b3-4273-baf1-f34f1ab95760-kube-api-access-msrjh\") pod \"7747ed1a-72b3-4273-baf1-f34f1ab95760\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607811 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c292df44-db14-4c45-8bd5-a0bd2da5e92f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c292df44-db14-4c45-8bd5-a0bd2da5e92f" (UID: "c292df44-db14-4c45-8bd5-a0bd2da5e92f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.607828 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv84l\" (UniqueName: \"kubernetes.io/projected/afce4d4f-f308-4581-be34-e782d95c89f3-kube-api-access-lv84l\") pod \"afce4d4f-f308-4581-be34-e782d95c89f3\" (UID: \"afce4d4f-f308-4581-be34-e782d95c89f3\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608006 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb9jd\" (UniqueName: \"kubernetes.io/projected/6f6c7de8-26a7-41ce-a452-6d392be91fe6-kube-api-access-fb9jd\") pod \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\" (UID: \"6f6c7de8-26a7-41ce-a452-6d392be91fe6\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608146 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c92c68-c226-47e8-b01a-3946123ed402-operator-scripts\") pod \"58c92c68-c226-47e8-b01a-3946123ed402\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7747ed1a-72b3-4273-baf1-f34f1ab95760-operator-scripts\") pod \"7747ed1a-72b3-4273-baf1-f34f1ab95760\" (UID: \"7747ed1a-72b3-4273-baf1-f34f1ab95760\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608209 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnxvm\" (UniqueName: \"kubernetes.io/projected/58c92c68-c226-47e8-b01a-3946123ed402-kube-api-access-hnxvm\") pod \"58c92c68-c226-47e8-b01a-3946123ed402\" (UID: \"58c92c68-c226-47e8-b01a-3946123ed402\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608209 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6c7de8-26a7-41ce-a452-6d392be91fe6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f6c7de8-26a7-41ce-a452-6d392be91fe6" (UID: "6f6c7de8-26a7-41ce-a452-6d392be91fe6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608285 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afce4d4f-f308-4581-be34-e782d95c89f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afce4d4f-f308-4581-be34-e782d95c89f3" (UID: "afce4d4f-f308-4581-be34-e782d95c89f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.608471 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58c92c68-c226-47e8-b01a-3946123ed402-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58c92c68-c226-47e8-b01a-3946123ed402" (UID: "58c92c68-c226-47e8-b01a-3946123ed402"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.609086 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6c7de8-26a7-41ce-a452-6d392be91fe6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.609100 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c92c68-c226-47e8-b01a-3946123ed402-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.609108 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c292df44-db14-4c45-8bd5-a0bd2da5e92f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.609118 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afce4d4f-f308-4581-be34-e782d95c89f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.609520 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7747ed1a-72b3-4273-baf1-f34f1ab95760-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7747ed1a-72b3-4273-baf1-f34f1ab95760" (UID: "7747ed1a-72b3-4273-baf1-f34f1ab95760"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.615710 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c92c68-c226-47e8-b01a-3946123ed402-kube-api-access-hnxvm" (OuterVolumeSpecName: "kube-api-access-hnxvm") pod "58c92c68-c226-47e8-b01a-3946123ed402" (UID: "58c92c68-c226-47e8-b01a-3946123ed402"). InnerVolumeSpecName "kube-api-access-hnxvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.615861 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7747ed1a-72b3-4273-baf1-f34f1ab95760-kube-api-access-msrjh" (OuterVolumeSpecName: "kube-api-access-msrjh") pod "7747ed1a-72b3-4273-baf1-f34f1ab95760" (UID: "7747ed1a-72b3-4273-baf1-f34f1ab95760"). InnerVolumeSpecName "kube-api-access-msrjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.619044 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6c7de8-26a7-41ce-a452-6d392be91fe6-kube-api-access-fb9jd" (OuterVolumeSpecName: "kube-api-access-fb9jd") pod "6f6c7de8-26a7-41ce-a452-6d392be91fe6" (UID: "6f6c7de8-26a7-41ce-a452-6d392be91fe6"). InnerVolumeSpecName "kube-api-access-fb9jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.619126 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c292df44-db14-4c45-8bd5-a0bd2da5e92f-kube-api-access-ll42h" (OuterVolumeSpecName: "kube-api-access-ll42h") pod "c292df44-db14-4c45-8bd5-a0bd2da5e92f" (UID: "c292df44-db14-4c45-8bd5-a0bd2da5e92f"). InnerVolumeSpecName "kube-api-access-ll42h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.619152 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afce4d4f-f308-4581-be34-e782d95c89f3-kube-api-access-lv84l" (OuterVolumeSpecName: "kube-api-access-lv84l") pod "afce4d4f-f308-4581-be34-e782d95c89f3" (UID: "afce4d4f-f308-4581-be34-e782d95c89f3"). InnerVolumeSpecName "kube-api-access-lv84l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.710246 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33472d78-2ff8-4741-bfa6-c85d46fa60ae-operator-scripts\") pod \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.710392 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq8pg\" (UniqueName: \"kubernetes.io/projected/33472d78-2ff8-4741-bfa6-c85d46fa60ae-kube-api-access-mq8pg\") pod \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\" (UID: \"33472d78-2ff8-4741-bfa6-c85d46fa60ae\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.711890 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msrjh\" (UniqueName: \"kubernetes.io/projected/7747ed1a-72b3-4273-baf1-f34f1ab95760-kube-api-access-msrjh\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.711916 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv84l\" (UniqueName: \"kubernetes.io/projected/afce4d4f-f308-4581-be34-e782d95c89f3-kube-api-access-lv84l\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.711925 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7747ed1a-72b3-4273-baf1-f34f1ab95760-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.711938 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb9jd\" (UniqueName: \"kubernetes.io/projected/6f6c7de8-26a7-41ce-a452-6d392be91fe6-kube-api-access-fb9jd\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.711947 4787 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-hnxvm\" (UniqueName: \"kubernetes.io/projected/58c92c68-c226-47e8-b01a-3946123ed402-kube-api-access-hnxvm\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.711956 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll42h\" (UniqueName: \"kubernetes.io/projected/c292df44-db14-4c45-8bd5-a0bd2da5e92f-kube-api-access-ll42h\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.712103 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33472d78-2ff8-4741-bfa6-c85d46fa60ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33472d78-2ff8-4741-bfa6-c85d46fa60ae" (UID: "33472d78-2ff8-4741-bfa6-c85d46fa60ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.716074 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33472d78-2ff8-4741-bfa6-c85d46fa60ae-kube-api-access-mq8pg" (OuterVolumeSpecName: "kube-api-access-mq8pg") pod "33472d78-2ff8-4741-bfa6-c85d46fa60ae" (UID: "33472d78-2ff8-4741-bfa6-c85d46fa60ae"). InnerVolumeSpecName "kube-api-access-mq8pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.814484 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33472d78-2ff8-4741-bfa6-c85d46fa60ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.814517 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq8pg\" (UniqueName: \"kubernetes.io/projected/33472d78-2ff8-4741-bfa6-c85d46fa60ae-kube-api-access-mq8pg\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.824041 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.916067 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5kk\" (UniqueName: \"kubernetes.io/projected/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-kube-api-access-qg5kk\") pod \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.916521 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-operator-scripts\") pod \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\" (UID: \"b49278e3-c0b6-4bb6-ab6f-49386ea52e68\") " Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.917486 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b49278e3-c0b6-4bb6-ab6f-49386ea52e68" (UID: "b49278e3-c0b6-4bb6-ab6f-49386ea52e68"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:49 crc kubenswrapper[4787]: I0219 19:40:49.920830 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-kube-api-access-qg5kk" (OuterVolumeSpecName: "kube-api-access-qg5kk") pod "b49278e3-c0b6-4bb6-ab6f-49386ea52e68" (UID: "b49278e3-c0b6-4bb6-ab6f-49386ea52e68"). InnerVolumeSpecName "kube-api-access-qg5kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.021262 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5kk\" (UniqueName: \"kubernetes.io/projected/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-kube-api-access-qg5kk\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.021295 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b49278e3-c0b6-4bb6-ab6f-49386ea52e68-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.251328 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-v8g95" event={"ID":"6f6c7de8-26a7-41ce-a452-6d392be91fe6","Type":"ContainerDied","Data":"0101b9c0e68743f3cf7648945103721d2ebf4966b31e00ba9015f329f3c726b1"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.251672 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0101b9c0e68743f3cf7648945103721d2ebf4966b31e00ba9015f329f3c726b1" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.251387 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-v8g95" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.253437 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-045b-account-create-update-nntlf" event={"ID":"33472d78-2ff8-4741-bfa6-c85d46fa60ae","Type":"ContainerDied","Data":"29dcdc8841ff2a572898a2ce15128ad22f23a07f3aa23d38c825cc6fe228ba1f"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.253477 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29dcdc8841ff2a572898a2ce15128ad22f23a07f3aa23d38c825cc6fe228ba1f" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.253454 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-045b-account-create-update-nntlf" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.260338 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mpb5r" event={"ID":"58c92c68-c226-47e8-b01a-3946123ed402","Type":"ContainerDied","Data":"846ccef482831f9766de80c7152878a1396ed92f394848ad6d36fd589f7f677a"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.260381 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846ccef482831f9766de80c7152878a1396ed92f394848ad6d36fd589f7f677a" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.260385 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mpb5r" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.261745 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ca57-account-create-update-6vtmg" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.261744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ca57-account-create-update-6vtmg" event={"ID":"afce4d4f-f308-4581-be34-e782d95c89f3","Type":"ContainerDied","Data":"569b1e3631678ffd59624e01ff7185303e1d647e56ae8090d7251e6ed41f109d"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.261786 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="569b1e3631678ffd59624e01ff7185303e1d647e56ae8090d7251e6ed41f109d" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.276863 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3109e7cb-bd74-40d5-a2ab-deb7a9794d44","Type":"ContainerStarted","Data":"3f62b1528c8e2dba9c4c8984bcb5849aebfdde7531ba233e9156597e5bfa8f0e"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.281472 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-sq5bc" event={"ID":"c292df44-db14-4c45-8bd5-a0bd2da5e92f","Type":"ContainerDied","Data":"35c17e73be6a30ffd56d7e88137d36d8378afb238f55f5c099fd8e3c64e575f2"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.281553 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35c17e73be6a30ffd56d7e88137d36d8378afb238f55f5c099fd8e3c64e575f2" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.281735 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-sq5bc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.283948 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-870d-account-create-update-pdvj4" event={"ID":"b49278e3-c0b6-4bb6-ab6f-49386ea52e68","Type":"ContainerDied","Data":"f33acc3a8ca12b3c720319e07b1d7221b962e3ca30a8e5294328ce20c93f85d4"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.284014 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f33acc3a8ca12b3c720319e07b1d7221b962e3ca30a8e5294328ce20c93f85d4" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.284104 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-870d-account-create-update-pdvj4" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.299717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-57ea-account-create-update-tvq42" event={"ID":"7747ed1a-72b3-4273-baf1-f34f1ab95760","Type":"ContainerDied","Data":"553a297cb38543f91343aef43a9280b29884ed6703266c6455a821f8cfb7ff28"} Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.299775 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553a297cb38543f91343aef43a9280b29884ed6703266c6455a821f8cfb7ff28" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.299745 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-57ea-account-create-update-tvq42" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.497358 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.009200882 podStartE2EDuration="46.497334452s" podCreationTimestamp="2026-02-19 19:40:04 +0000 UTC" firstStartedPulling="2026-02-19 19:40:38.701721188 +0000 UTC m=+1306.492387120" lastFinishedPulling="2026-02-19 19:40:47.189854748 +0000 UTC m=+1314.980520690" observedRunningTime="2026-02-19 19:40:50.328476915 +0000 UTC m=+1318.119142867" watchObservedRunningTime="2026-02-19 19:40:50.497334452 +0000 UTC m=+1318.288000394" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.647300 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qxzhm"] Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.700878 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zn9pc"] Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701339 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b660e059-e520-4aa9-898b-28346b096b31" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701360 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b660e059-e520-4aa9-898b-28346b096b31" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701387 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c292df44-db14-4c45-8bd5-a0bd2da5e92f" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701398 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c292df44-db14-4c45-8bd5-a0bd2da5e92f" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701417 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33472d78-2ff8-4741-bfa6-c85d46fa60ae" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701425 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="33472d78-2ff8-4741-bfa6-c85d46fa60ae" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701442 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afce4d4f-f308-4581-be34-e782d95c89f3" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701449 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="afce4d4f-f308-4581-be34-e782d95c89f3" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701470 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c92c68-c226-47e8-b01a-3946123ed402" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701477 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c92c68-c226-47e8-b01a-3946123ed402" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701494 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6c7de8-26a7-41ce-a452-6d392be91fe6" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701501 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6c7de8-26a7-41ce-a452-6d392be91fe6" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: E0219 19:40:50.701514 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49278e3-c0b6-4bb6-ab6f-49386ea52e68" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701521 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49278e3-c0b6-4bb6-ab6f-49386ea52e68" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc 
kubenswrapper[4787]: E0219 19:40:50.701540 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7747ed1a-72b3-4273-baf1-f34f1ab95760" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.701548 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7747ed1a-72b3-4273-baf1-f34f1ab95760" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709064 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7747ed1a-72b3-4273-baf1-f34f1ab95760" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709106 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49278e3-c0b6-4bb6-ab6f-49386ea52e68" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709134 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c92c68-c226-47e8-b01a-3946123ed402" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709153 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="33472d78-2ff8-4741-bfa6-c85d46fa60ae" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709167 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="afce4d4f-f308-4581-be34-e782d95c89f3" containerName="mariadb-account-create-update" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709181 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b660e059-e520-4aa9-898b-28346b096b31" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709193 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c292df44-db14-4c45-8bd5-a0bd2da5e92f" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.709207 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6f6c7de8-26a7-41ce-a452-6d392be91fe6" containerName="mariadb-database-create" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.710649 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.712870 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.754187 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zn9pc"] Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.840843 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fgjg\" (UniqueName: \"kubernetes.io/projected/3995d1ff-649e-42b8-87c9-325a423587a9-kube-api-access-5fgjg\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.840958 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.841016 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.841055 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.841074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.841125 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-config\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.942653 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fgjg\" (UniqueName: \"kubernetes.io/projected/3995d1ff-649e-42b8-87c9-325a423587a9-kube-api-access-5fgjg\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.942723 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.942784 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.942821 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.942840 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.942889 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-config\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.943725 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-config\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.944439 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.945061 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.945654 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.946175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:50 crc kubenswrapper[4787]: I0219 19:40:50.984001 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fgjg\" (UniqueName: \"kubernetes.io/projected/3995d1ff-649e-42b8-87c9-325a423587a9-kube-api-access-5fgjg\") pod \"dnsmasq-dns-74f6bcbc87-zn9pc\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:51 crc kubenswrapper[4787]: I0219 19:40:51.040213 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:52 crc kubenswrapper[4787]: I0219 19:40:52.260037 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:52 crc kubenswrapper[4787]: I0219 19:40:52.267205 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:52 crc kubenswrapper[4787]: I0219 19:40:52.318268 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" containerName="dnsmasq-dns" containerID="cri-o://f753ade8cce6a371eced07045d011d81cacdf6d3741dcc405940ae6076ee63d9" gracePeriod=10 Feb 19 19:40:52 crc kubenswrapper[4787]: I0219 19:40:52.323183 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 19:40:53 crc kubenswrapper[4787]: I0219 19:40:53.351585 4787 generic.go:334] "Generic (PLEG): container finished" podID="febd637e-6511-4aba-add1-8c52808c1f2e" containerID="f753ade8cce6a371eced07045d011d81cacdf6d3741dcc405940ae6076ee63d9" exitCode=0 Feb 19 19:40:53 crc kubenswrapper[4787]: I0219 19:40:53.351676 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" event={"ID":"febd637e-6511-4aba-add1-8c52808c1f2e","Type":"ContainerDied","Data":"f753ade8cce6a371eced07045d011d81cacdf6d3741dcc405940ae6076ee63d9"} Feb 19 19:40:53 crc kubenswrapper[4787]: I0219 19:40:53.925445 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.032240 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-dns-svc\") pod \"febd637e-6511-4aba-add1-8c52808c1f2e\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.032339 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zh2l\" (UniqueName: \"kubernetes.io/projected/febd637e-6511-4aba-add1-8c52808c1f2e-kube-api-access-7zh2l\") pod \"febd637e-6511-4aba-add1-8c52808c1f2e\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.032390 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-nb\") pod \"febd637e-6511-4aba-add1-8c52808c1f2e\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.032430 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-sb\") pod \"febd637e-6511-4aba-add1-8c52808c1f2e\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.032468 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-config\") pod \"febd637e-6511-4aba-add1-8c52808c1f2e\" (UID: \"febd637e-6511-4aba-add1-8c52808c1f2e\") " Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.043213 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/febd637e-6511-4aba-add1-8c52808c1f2e-kube-api-access-7zh2l" (OuterVolumeSpecName: "kube-api-access-7zh2l") pod "febd637e-6511-4aba-add1-8c52808c1f2e" (UID: "febd637e-6511-4aba-add1-8c52808c1f2e"). InnerVolumeSpecName "kube-api-access-7zh2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.100034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "febd637e-6511-4aba-add1-8c52808c1f2e" (UID: "febd637e-6511-4aba-add1-8c52808c1f2e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.101322 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "febd637e-6511-4aba-add1-8c52808c1f2e" (UID: "febd637e-6511-4aba-add1-8c52808c1f2e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.108058 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-config" (OuterVolumeSpecName: "config") pod "febd637e-6511-4aba-add1-8c52808c1f2e" (UID: "febd637e-6511-4aba-add1-8c52808c1f2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.109146 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "febd637e-6511-4aba-add1-8c52808c1f2e" (UID: "febd637e-6511-4aba-add1-8c52808c1f2e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.135048 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zh2l\" (UniqueName: \"kubernetes.io/projected/febd637e-6511-4aba-add1-8c52808c1f2e-kube-api-access-7zh2l\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.135082 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.135091 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.135114 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.135122 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd637e-6511-4aba-add1-8c52808c1f2e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:54 crc kubenswrapper[4787]: W0219 19:40:54.246196 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3995d1ff_649e_42b8_87c9_325a423587a9.slice/crio-90f905917a2f22eae8f727222a9f90ac25a3ea018308a8f551c830859d321dac WatchSource:0}: Error finding container 90f905917a2f22eae8f727222a9f90ac25a3ea018308a8f551c830859d321dac: Status 404 returned error can't find the container with id 90f905917a2f22eae8f727222a9f90ac25a3ea018308a8f551c830859d321dac Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.246507 4787 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zn9pc"] Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.370710 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" event={"ID":"febd637e-6511-4aba-add1-8c52808c1f2e","Type":"ContainerDied","Data":"e43de0854c98aa96ecb3e0a2c0c56200fe095092b0c87536aeea773e8b4ca7fa"} Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.370762 4787 scope.go:117] "RemoveContainer" containerID="f753ade8cce6a371eced07045d011d81cacdf6d3741dcc405940ae6076ee63d9" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.370877 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-qxzhm" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.374678 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4nk2w" event={"ID":"3fced1c8-edcc-41a6-a703-3bde87073a5f","Type":"ContainerStarted","Data":"b327f806b4e58662b7ebd479b911d436d4675e9161b0f355f8cb2b5757532bee"} Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.376122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" event={"ID":"3995d1ff-649e-42b8-87c9-325a423587a9","Type":"ContainerStarted","Data":"90f905917a2f22eae8f727222a9f90ac25a3ea018308a8f551c830859d321dac"} Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.410342 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4nk2w" podStartSLOduration=3.209803606 podStartE2EDuration="10.410322582s" podCreationTimestamp="2026-02-19 19:40:44 +0000 UTC" firstStartedPulling="2026-02-19 19:40:46.213273028 +0000 UTC m=+1314.003938970" lastFinishedPulling="2026-02-19 19:40:53.413792004 +0000 UTC m=+1321.204457946" observedRunningTime="2026-02-19 19:40:54.398411303 +0000 UTC m=+1322.189077245" watchObservedRunningTime="2026-02-19 19:40:54.410322582 +0000 
UTC m=+1322.200988524" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.419838 4787 scope.go:117] "RemoveContainer" containerID="063691f2d14f85e66441e9223f681d293cbdf64a123d7eb5e30163009215c34f" Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.445221 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qxzhm"] Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.455345 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-qxzhm"] Feb 19 19:40:54 crc kubenswrapper[4787]: I0219 19:40:54.903645 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" path="/var/lib/kubelet/pods/febd637e-6511-4aba-add1-8c52808c1f2e/volumes" Feb 19 19:40:55 crc kubenswrapper[4787]: I0219 19:40:55.388536 4787 generic.go:334] "Generic (PLEG): container finished" podID="3995d1ff-649e-42b8-87c9-325a423587a9" containerID="f91944b4f64a0233e841c291207b7d7250acf7d108fad1266116b3c2c1c00f40" exitCode=0 Feb 19 19:40:55 crc kubenswrapper[4787]: I0219 19:40:55.389807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" event={"ID":"3995d1ff-649e-42b8-87c9-325a423587a9","Type":"ContainerDied","Data":"f91944b4f64a0233e841c291207b7d7250acf7d108fad1266116b3c2c1c00f40"} Feb 19 19:40:56 crc kubenswrapper[4787]: I0219 19:40:56.405397 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" event={"ID":"3995d1ff-649e-42b8-87c9-325a423587a9","Type":"ContainerStarted","Data":"8685a5c8bd8abda579ed05925f7696f03fc97e5c2f12ae5b8c713a13570e8d0e"} Feb 19 19:40:56 crc kubenswrapper[4787]: I0219 19:40:56.406010 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:40:56 crc kubenswrapper[4787]: I0219 19:40:56.461494 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" podStartSLOduration=6.461474711 podStartE2EDuration="6.461474711s" podCreationTimestamp="2026-02-19 19:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:40:56.456100788 +0000 UTC m=+1324.246766760" watchObservedRunningTime="2026-02-19 19:40:56.461474711 +0000 UTC m=+1324.252140663" Feb 19 19:40:57 crc kubenswrapper[4787]: I0219 19:40:57.418888 4787 generic.go:334] "Generic (PLEG): container finished" podID="3fced1c8-edcc-41a6-a703-3bde87073a5f" containerID="b327f806b4e58662b7ebd479b911d436d4675e9161b0f355f8cb2b5757532bee" exitCode=0 Feb 19 19:40:57 crc kubenswrapper[4787]: I0219 19:40:57.418970 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4nk2w" event={"ID":"3fced1c8-edcc-41a6-a703-3bde87073a5f","Type":"ContainerDied","Data":"b327f806b4e58662b7ebd479b911d436d4675e9161b0f355f8cb2b5757532bee"} Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.832806 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.940409 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zmvn\" (UniqueName: \"kubernetes.io/projected/3fced1c8-edcc-41a6-a703-3bde87073a5f-kube-api-access-5zmvn\") pod \"3fced1c8-edcc-41a6-a703-3bde87073a5f\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.940512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-config-data\") pod \"3fced1c8-edcc-41a6-a703-3bde87073a5f\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.940536 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-combined-ca-bundle\") pod \"3fced1c8-edcc-41a6-a703-3bde87073a5f\" (UID: \"3fced1c8-edcc-41a6-a703-3bde87073a5f\") " Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.946352 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fced1c8-edcc-41a6-a703-3bde87073a5f-kube-api-access-5zmvn" (OuterVolumeSpecName: "kube-api-access-5zmvn") pod "3fced1c8-edcc-41a6-a703-3bde87073a5f" (UID: "3fced1c8-edcc-41a6-a703-3bde87073a5f"). InnerVolumeSpecName "kube-api-access-5zmvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.969194 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fced1c8-edcc-41a6-a703-3bde87073a5f" (UID: "3fced1c8-edcc-41a6-a703-3bde87073a5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:58 crc kubenswrapper[4787]: I0219 19:40:58.990535 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-config-data" (OuterVolumeSpecName: "config-data") pod "3fced1c8-edcc-41a6-a703-3bde87073a5f" (UID: "3fced1c8-edcc-41a6-a703-3bde87073a5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.044006 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.044040 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fced1c8-edcc-41a6-a703-3bde87073a5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.044052 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zmvn\" (UniqueName: \"kubernetes.io/projected/3fced1c8-edcc-41a6-a703-3bde87073a5f-kube-api-access-5zmvn\") on node \"crc\" DevicePath \"\"" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.436932 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4nk2w" event={"ID":"3fced1c8-edcc-41a6-a703-3bde87073a5f","Type":"ContainerDied","Data":"853f6eab993bafef9564fa7cf1f74d433ae42089ecf70a421737f8700c080717"} Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.436977 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="853f6eab993bafef9564fa7cf1f74d433ae42089ecf70a421737f8700c080717" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.436979 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4nk2w" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.722431 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zn9pc"] Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.722736 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="dnsmasq-dns" containerID="cri-o://8685a5c8bd8abda579ed05925f7696f03fc97e5c2f12ae5b8c713a13570e8d0e" gracePeriod=10 Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.765887 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6gpd5"] Feb 19 19:40:59 crc kubenswrapper[4787]: E0219 19:40:59.766344 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" containerName="init" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.766362 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" containerName="init" Feb 19 19:40:59 crc kubenswrapper[4787]: E0219 19:40:59.766379 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fced1c8-edcc-41a6-a703-3bde87073a5f" containerName="keystone-db-sync" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.766386 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fced1c8-edcc-41a6-a703-3bde87073a5f" containerName="keystone-db-sync" Feb 19 19:40:59 crc kubenswrapper[4787]: E0219 19:40:59.766414 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" containerName="dnsmasq-dns" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.766420 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" containerName="dnsmasq-dns" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.766599 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="febd637e-6511-4aba-add1-8c52808c1f2e" containerName="dnsmasq-dns" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.766630 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fced1c8-edcc-41a6-a703-3bde87073a5f" containerName="keystone-db-sync" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.767314 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.770103 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.770273 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.770526 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b8sqz" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.770698 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.773055 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.783639 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wt8b8"] Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.785747 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.803714 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6gpd5"] Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.828314 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wt8b8"] Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-scripts\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868114 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-combined-ca-bundle\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868340 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-config-data\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 
crc kubenswrapper[4787]: I0219 19:40:59.868433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-credential-keys\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868527 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868559 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4z8\" (UniqueName: \"kubernetes.io/projected/38564a28-9826-4ca2-82ee-a4c330461050-kube-api-access-lm4z8\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868625 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-config\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-fernet-keys\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " 
pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868680 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmn7\" (UniqueName: \"kubernetes.io/projected/f3c39128-b6b9-4e85-8472-ea37c6779753-kube-api-access-mrmn7\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868707 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-svc\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.868776 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.960857 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-9m8s6"] Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.970786 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.970864 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-scripts\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.970897 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-combined-ca-bundle\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.970978 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-config-data\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971026 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971050 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-credential-keys\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971090 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-swift-storage-0\") pod 
\"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971118 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4z8\" (UniqueName: \"kubernetes.io/projected/38564a28-9826-4ca2-82ee-a4c330461050-kube-api-access-lm4z8\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971143 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-config\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-fernet-keys\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971181 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrmn7\" (UniqueName: \"kubernetes.io/projected/f3c39128-b6b9-4e85-8472-ea37c6779753-kube-api-access-mrmn7\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.971197 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-svc\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") 
" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.972142 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-svc\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.973410 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.975382 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9m8s6" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.976671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-config\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.976873 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.993673 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-fernet-keys\") pod \"keystone-bootstrap-6gpd5\" 
(UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.993831 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-config-data\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.994601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:40:59 crc kubenswrapper[4787]: I0219 19:40:59.994884 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-scripts\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.003012 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-credential-keys\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.004837 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-xrtpd" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.004956 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.005123 4787 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-db-sync-9m8s6"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.020365 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4z8\" (UniqueName: \"kubernetes.io/projected/38564a28-9826-4ca2-82ee-a4c330461050-kube-api-access-lm4z8\") pod \"dnsmasq-dns-847c4cc679-wt8b8\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.028301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-combined-ca-bundle\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.056050 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrmn7\" (UniqueName: \"kubernetes.io/projected/f3c39128-b6b9-4e85-8472-ea37c6779753-kube-api-access-mrmn7\") pod \"keystone-bootstrap-6gpd5\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.075841 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrkwg\" (UniqueName: \"kubernetes.io/projected/3d2b70fa-1540-4748-8660-6d1fb44036fe-kube-api-access-hrkwg\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.075907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-config-data\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 
19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.076009 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-combined-ca-bundle\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.088229 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.093715 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vlnf7"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.095076 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.117166 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.122699 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.122984 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.123129 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xb2t4" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.153052 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vlnf7"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177196 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-combined-ca-bundle\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4k49\" (UniqueName: \"kubernetes.io/projected/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-kube-api-access-c4k49\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177324 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-db-sync-config-data\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177341 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hrkwg\" (UniqueName: \"kubernetes.io/projected/3d2b70fa-1540-4748-8660-6d1fb44036fe-kube-api-access-hrkwg\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-combined-ca-bundle\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177393 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-etc-machine-id\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177418 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-config-data\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177489 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-config-data\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.177520 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-scripts\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.180785 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qdlxg"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.182277 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.187961 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-combined-ca-bundle\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.192339 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.192515 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.192637 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrzvh" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.193383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-config-data\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.224132 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrkwg\" (UniqueName: 
\"kubernetes.io/projected/3d2b70fa-1540-4748-8660-6d1fb44036fe-kube-api-access-hrkwg\") pod \"heat-db-sync-9m8s6\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.253460 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qdlxg"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.279867 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-config\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.279923 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4k49\" (UniqueName: \"kubernetes.io/projected/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-kube-api-access-c4k49\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.279951 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-db-sync-config-data\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.279978 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-combined-ca-bundle\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.280002 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-etc-machine-id\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.280034 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njsnk\" (UniqueName: \"kubernetes.io/projected/e917a58d-1e20-4326-a9a1-44c35c41636c-kube-api-access-njsnk\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.280105 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-config-data\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.280137 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-scripts\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.280159 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-combined-ca-bundle\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.288784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-etc-machine-id\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.303560 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-db-sync-config-data\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.330728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-scripts\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.332685 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.348227 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7vnc9"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.383179 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.388480 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-combined-ca-bundle\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.388708 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tvg4q" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.388876 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-combined-ca-bundle\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.388967 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-config\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.389064 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njsnk\" (UniqueName: \"kubernetes.io/projected/e917a58d-1e20-4326-a9a1-44c35c41636c-kube-api-access-njsnk\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.392248 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-config-data\") pod 
\"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.407683 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.408687 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-config\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.418625 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4k49\" (UniqueName: \"kubernetes.io/projected/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-kube-api-access-c4k49\") pod \"cinder-db-sync-vlnf7\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.422165 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-combined-ca-bundle\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.479109 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7vnc9"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.490891 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-combined-ca-bundle\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.490985 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqwx\" (UniqueName: \"kubernetes.io/projected/c3100990-268a-4c84-8e81-ba54457b771b-kube-api-access-wtqwx\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.491078 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-db-sync-config-data\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.491459 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njsnk\" (UniqueName: \"kubernetes.io/projected/e917a58d-1e20-4326-a9a1-44c35c41636c-kube-api-access-njsnk\") pod \"neutron-db-sync-qdlxg\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.567911 4787 generic.go:334] "Generic (PLEG): container finished" podID="3995d1ff-649e-42b8-87c9-325a423587a9" containerID="8685a5c8bd8abda579ed05925f7696f03fc97e5c2f12ae5b8c713a13570e8d0e" exitCode=0 Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.567963 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" event={"ID":"3995d1ff-649e-42b8-87c9-325a423587a9","Type":"ContainerDied","Data":"8685a5c8bd8abda579ed05925f7696f03fc97e5c2f12ae5b8c713a13570e8d0e"} Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.601470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-combined-ca-bundle\") pod \"barbican-db-sync-7vnc9\" (UID: 
\"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.601548 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqwx\" (UniqueName: \"kubernetes.io/projected/c3100990-268a-4c84-8e81-ba54457b771b-kube-api-access-wtqwx\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.601628 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-db-sync-config-data\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.616717 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-db-sync-config-data\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.625415 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wt8b8"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.638778 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqwx\" (UniqueName: \"kubernetes.io/projected/c3100990-268a-4c84-8e81-ba54457b771b-kube-api-access-wtqwx\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.641761 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-combined-ca-bundle\") pod \"barbican-db-sync-7vnc9\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.649960 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.683026 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.709471 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ktn47"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.738153 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.775583 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.775750 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4g7tj" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.775924 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.792109 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.796504 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sns42"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.799198 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.840762 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ktn47"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.841851 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84qx\" (UniqueName: \"kubernetes.io/projected/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-kube-api-access-c84qx\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.842719 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.842988 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.843154 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf1edd0-4024-4d19-be88-bd8f001052d8-logs\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.843230 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.843308 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6dp\" (UniqueName: \"kubernetes.io/projected/dbf1edd0-4024-4d19-be88-bd8f001052d8-kube-api-access-bf6dp\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.843388 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.843466 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-config-data\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.843587 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-combined-ca-bundle\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.846798 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-config\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.854089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-scripts\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.854193 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sns42"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.869083 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.872014 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.884079 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.884802 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.936458 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.955863 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-config\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.955962 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-log-httpd\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956008 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-scripts\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956052 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84qx\" (UniqueName: \"kubernetes.io/projected/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-kube-api-access-c84qx\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956080 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-run-httpd\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956151 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956197 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956218 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-scripts\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956251 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf1edd0-4024-4d19-be88-bd8f001052d8-logs\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956266 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-config-data\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956282 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxfj5\" (UniqueName: \"kubernetes.io/projected/364fd284-971f-4143-94fa-542904ee31fb-kube-api-access-mxfj5\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956299 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6dp\" (UniqueName: \"kubernetes.io/projected/dbf1edd0-4024-4d19-be88-bd8f001052d8-kube-api-access-bf6dp\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956349 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-config-data\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956400 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956429 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-combined-ca-bundle\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.956863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-config\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.963863 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" 
(UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.967029 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf1edd0-4024-4d19-be88-bd8f001052d8-logs\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.971291 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.972270 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-combined-ca-bundle\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.983364 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.993459 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84qx\" (UniqueName: \"kubernetes.io/projected/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-kube-api-access-c84qx\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 
19:41:00 crc kubenswrapper[4787]: I0219 19:41:00.997645 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-config-data\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.004030 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-scripts\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.014291 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6dp\" (UniqueName: \"kubernetes.io/projected/dbf1edd0-4024-4d19-be88-bd8f001052d8-kube-api-access-bf6dp\") pod \"placement-db-sync-ktn47\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.022782 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sns42\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.040330 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.044435 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.047445 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-kjfdf" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.047715 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.047836 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.049922 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058386 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-scripts\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-config-data\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058435 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxfj5\" (UniqueName: 
\"kubernetes.io/projected/364fd284-971f-4143-94fa-542904ee31fb-kube-api-access-mxfj5\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058489 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-log-httpd\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.058635 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-run-httpd\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.059210 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-run-httpd\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.060757 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.062489 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-log-httpd\") pod 
\"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.062594 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.064175 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-scripts\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.073283 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.090975 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-config-data\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.094178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxfj5\" (UniqueName: \"kubernetes.io/projected/364fd284-971f-4143-94fa-542904ee31fb-kube-api-access-mxfj5\") pod \"ceilometer-0\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.117510 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.142947 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.160589 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161039 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161119 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161191 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161230 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-logs\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161327 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161356 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.161391 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnjz\" (UniqueName: \"kubernetes.io/projected/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-kube-api-access-gbnjz\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.162964 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.166986 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.174077 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.174276 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.193469 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.202332 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.206016 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.253453 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6gpd5"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.262802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.262863 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: 
I0219 19:41:01.262903 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psjxx\" (UniqueName: \"kubernetes.io/projected/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-kube-api-access-psjxx\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.262938 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.262991 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.263012 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.263030 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 
19:41:01.263055 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-logs\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-logs\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264470 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264524 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnjz\" (UniqueName: \"kubernetes.io/projected/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-kube-api-access-gbnjz\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264597 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264657 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.264729 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.270079 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.270879 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.275344 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.276151 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.276198 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ed3d7c97b215637ba9b7943cd9135b129ba954894c5daafc1dc18f3b0039582/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.278130 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.285168 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wt8b8"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.285958 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.298237 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnjz\" (UniqueName: \"kubernetes.io/projected/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-kube-api-access-gbnjz\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.365567 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-swift-storage-0\") pod \"3995d1ff-649e-42b8-87c9-325a423587a9\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.365753 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-svc\") pod \"3995d1ff-649e-42b8-87c9-325a423587a9\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.365807 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-nb\") pod \"3995d1ff-649e-42b8-87c9-325a423587a9\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.365838 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-config\") pod \"3995d1ff-649e-42b8-87c9-325a423587a9\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366000 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fgjg\" (UniqueName: \"kubernetes.io/projected/3995d1ff-649e-42b8-87c9-325a423587a9-kube-api-access-5fgjg\") pod \"3995d1ff-649e-42b8-87c9-325a423587a9\" (UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366028 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-sb\") pod \"3995d1ff-649e-42b8-87c9-325a423587a9\" 
(UID: \"3995d1ff-649e-42b8-87c9-325a423587a9\") " Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366259 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366318 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psjxx\" (UniqueName: \"kubernetes.io/projected/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-kube-api-access-psjxx\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366402 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.366440 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.367678 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.367733 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.367754 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.378316 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-logs\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.378559 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 
19:41:01.385584 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.385637 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/039867c40e5afa02457a0a3be85bbd756f1b9686f252b64fbed40b83ae37d83d/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.395470 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.416793 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.420517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.440154 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-psjxx\" (UniqueName: \"kubernetes.io/projected/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-kube-api-access-psjxx\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.456500 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9m8s6"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.459394 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.463466 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3995d1ff-649e-42b8-87c9-325a423587a9-kube-api-access-5fgjg" (OuterVolumeSpecName: "kube-api-access-5fgjg") pod "3995d1ff-649e-42b8-87c9-325a423587a9" (UID: "3995d1ff-649e-42b8-87c9-325a423587a9"). InnerVolumeSpecName "kube-api-access-5fgjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.464756 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.477054 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fgjg\" (UniqueName: \"kubernetes.io/projected/3995d1ff-649e-42b8-87c9-325a423587a9-kube-api-access-5fgjg\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:01 crc kubenswrapper[4787]: W0219 19:41:01.485109 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d2b70fa_1540_4748_8660_6d1fb44036fe.slice/crio-defefb4f2a14a465e0dce525a6312abc590012e98ef7001db1f7473c4fb9972d WatchSource:0}: Error finding container defefb4f2a14a465e0dce525a6312abc590012e98ef7001db1f7473c4fb9972d: Status 404 returned error can't find the container with id defefb4f2a14a465e0dce525a6312abc590012e98ef7001db1f7473c4fb9972d Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.520621 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.529683 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3995d1ff-649e-42b8-87c9-325a423587a9" (UID: 
"3995d1ff-649e-42b8-87c9-325a423587a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.540285 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3995d1ff-649e-42b8-87c9-325a423587a9" (UID: "3995d1ff-649e-42b8-87c9-325a423587a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.548509 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.587478 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.588160 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.588185 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.597716 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3995d1ff-649e-42b8-87c9-325a423587a9" (UID: "3995d1ff-649e-42b8-87c9-325a423587a9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.597749 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3995d1ff-649e-42b8-87c9-325a423587a9" (UID: "3995d1ff-649e-42b8-87c9-325a423587a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.597916 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-config" (OuterVolumeSpecName: "config") pod "3995d1ff-649e-42b8-87c9-325a423587a9" (UID: "3995d1ff-649e-42b8-87c9-325a423587a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.624847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gpd5" event={"ID":"f3c39128-b6b9-4e85-8472-ea37c6779753","Type":"ContainerStarted","Data":"eb93cca404ebf84234a2defda65b5dcbc5f4f50855fcc8a7a1ad4436ac94a17d"} Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.626211 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" event={"ID":"38564a28-9826-4ca2-82ee-a4c330461050","Type":"ContainerStarted","Data":"05c08e2adc346fd2726518e655aca08c94463c051d2dec3971195833283daccd"} Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.627857 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" event={"ID":"3995d1ff-649e-42b8-87c9-325a423587a9","Type":"ContainerDied","Data":"90f905917a2f22eae8f727222a9f90ac25a3ea018308a8f551c830859d321dac"} Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.627888 4787 scope.go:117] "RemoveContainer" 
containerID="8685a5c8bd8abda579ed05925f7696f03fc97e5c2f12ae5b8c713a13570e8d0e" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.628014 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.641209 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9m8s6" event={"ID":"3d2b70fa-1540-4748-8660-6d1fb44036fe","Type":"ContainerStarted","Data":"defefb4f2a14a465e0dce525a6312abc590012e98ef7001db1f7473c4fb9972d"} Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.693291 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.693555 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.693564 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3995d1ff-649e-42b8-87c9-325a423587a9-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.706659 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zn9pc"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.722816 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-zn9pc"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.741224 4787 scope.go:117] "RemoveContainer" containerID="f91944b4f64a0233e841c291207b7d7250acf7d108fad1266116b3c2c1c00f40" Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.896789 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-sync-qdlxg"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.936144 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vlnf7"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.972934 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7vnc9"] Feb 19 19:41:01 crc kubenswrapper[4787]: I0219 19:41:01.988511 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ktn47"] Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.181673 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:41:02 crc kubenswrapper[4787]: W0219 19:41:02.219915 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod364fd284_971f_4143_94fa_542904ee31fb.slice/crio-1a77c2480f738a99a588846a88589dfd3942d0bf3523bf6adbefc8b61b2893e8 WatchSource:0}: Error finding container 1a77c2480f738a99a588846a88589dfd3942d0bf3523bf6adbefc8b61b2893e8: Status 404 returned error can't find the container with id 1a77c2480f738a99a588846a88589dfd3942d0bf3523bf6adbefc8b61b2893e8 Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.229041 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sns42"] Feb 19 19:41:02 crc kubenswrapper[4787]: W0219 19:41:02.244068 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4903eba_3da3_4b4a_b6ad_4db4e1fd3f81.slice/crio-7a5f9bb6ec116529822a96d4782ae8d9b164060a291df10a2c1353689f78b5d5 WatchSource:0}: Error finding container 7a5f9bb6ec116529822a96d4782ae8d9b164060a291df10a2c1353689f78b5d5: Status 404 returned error can't find the container with id 7a5f9bb6ec116529822a96d4782ae8d9b164060a291df10a2c1353689f78b5d5 Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.288154 4787 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.389846 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.486092 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.711789 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktn47" event={"ID":"dbf1edd0-4024-4d19-be88-bd8f001052d8","Type":"ContainerStarted","Data":"2acd3079d07936c8a1f36d82650a5dcb646ec7dd890630104520e66b5a822c6d"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.714030 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gpd5" event={"ID":"f3c39128-b6b9-4e85-8472-ea37c6779753","Type":"ContainerStarted","Data":"ff61d5e424a7aaa77f090d13c53e03d63779546af46704b7c03c1958840db54f"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.717119 4787 generic.go:334] "Generic (PLEG): container finished" podID="38564a28-9826-4ca2-82ee-a4c330461050" containerID="5a76b5f770b212f15a75e704326e9769e54d818b64dc3c76942e6482534b0a90" exitCode=0 Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.717186 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" event={"ID":"38564a28-9826-4ca2-82ee-a4c330461050","Type":"ContainerDied","Data":"5a76b5f770b212f15a75e704326e9769e54d818b64dc3c76942e6482534b0a90"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.730802 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerStarted","Data":"1a77c2480f738a99a588846a88589dfd3942d0bf3523bf6adbefc8b61b2893e8"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.791263 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" event={"ID":"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81","Type":"ContainerStarted","Data":"7a5f9bb6ec116529822a96d4782ae8d9b164060a291df10a2c1353689f78b5d5"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.868757 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdlxg" event={"ID":"e917a58d-1e20-4326-a9a1-44c35c41636c","Type":"ContainerStarted","Data":"dfe8dca608ea0e3d0aa8a872d7bb42c345d6d1babe546ff08c4757eb7c5a638e"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.868805 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdlxg" event={"ID":"e917a58d-1e20-4326-a9a1-44c35c41636c","Type":"ContainerStarted","Data":"7f05952938b4bca500033dc07b8edce4406a64a83f9e1c75a3e10b2b588cf2f9"} Feb 19 19:41:02 crc kubenswrapper[4787]: I0219 19:41:02.872173 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6gpd5" podStartSLOduration=3.872159403 podStartE2EDuration="3.872159403s" podCreationTimestamp="2026-02-19 19:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:02.776872141 +0000 UTC m=+1330.567538083" watchObservedRunningTime="2026-02-19 19:41:02.872159403 +0000 UTC m=+1330.662825345" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.142954 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" path="/var/lib/kubelet/pods/3995d1ff-649e-42b8-87c9-325a423587a9/volumes" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.144074 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.144100 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7vnc9" 
event={"ID":"c3100990-268a-4c84-8e81-ba54457b771b","Type":"ContainerStarted","Data":"e65974311dcf3ade34e44c41efac41fb0d46a64d2f8035b7bc42dcf99292c7ba"} Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.144117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlnf7" event={"ID":"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d","Type":"ContainerStarted","Data":"97762f09ec100ba927e6eb486eb7615fa847294f3bf371c9191d29bc9b8f1b93"} Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.152169 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qdlxg" podStartSLOduration=3.152147714 podStartE2EDuration="3.152147714s" podCreationTimestamp="2026-02-19 19:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:02.917805692 +0000 UTC m=+1330.708471634" watchObservedRunningTime="2026-02-19 19:41:03.152147714 +0000 UTC m=+1330.942813656" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.549036 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.610592 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-config\") pod \"38564a28-9826-4ca2-82ee-a4c330461050\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.610951 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm4z8\" (UniqueName: \"kubernetes.io/projected/38564a28-9826-4ca2-82ee-a4c330461050-kube-api-access-lm4z8\") pod \"38564a28-9826-4ca2-82ee-a4c330461050\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.610975 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-swift-storage-0\") pod \"38564a28-9826-4ca2-82ee-a4c330461050\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.611029 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-svc\") pod \"38564a28-9826-4ca2-82ee-a4c330461050\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.611215 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-nb\") pod \"38564a28-9826-4ca2-82ee-a4c330461050\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.611326 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-sb\") pod \"38564a28-9826-4ca2-82ee-a4c330461050\" (UID: \"38564a28-9826-4ca2-82ee-a4c330461050\") " Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.625370 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38564a28-9826-4ca2-82ee-a4c330461050-kube-api-access-lm4z8" (OuterVolumeSpecName: "kube-api-access-lm4z8") pod "38564a28-9826-4ca2-82ee-a4c330461050" (UID: "38564a28-9826-4ca2-82ee-a4c330461050"). InnerVolumeSpecName "kube-api-access-lm4z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.653958 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38564a28-9826-4ca2-82ee-a4c330461050" (UID: "38564a28-9826-4ca2-82ee-a4c330461050"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.655298 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-config" (OuterVolumeSpecName: "config") pod "38564a28-9826-4ca2-82ee-a4c330461050" (UID: "38564a28-9826-4ca2-82ee-a4c330461050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.662715 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38564a28-9826-4ca2-82ee-a4c330461050" (UID: "38564a28-9826-4ca2-82ee-a4c330461050"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.700248 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38564a28-9826-4ca2-82ee-a4c330461050" (UID: "38564a28-9826-4ca2-82ee-a4c330461050"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.708839 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.709312 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38564a28-9826-4ca2-82ee-a4c330461050" (UID: "38564a28-9826-4ca2-82ee-a4c330461050"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.723152 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.723496 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.723765 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm4z8\" (UniqueName: \"kubernetes.io/projected/38564a28-9826-4ca2-82ee-a4c330461050-kube-api-access-lm4z8\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.723796 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.723807 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.723818 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38564a28-9826-4ca2-82ee-a4c330461050-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.985771 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46","Type":"ContainerStarted","Data":"6718001e51ece37d6a6a1db487a89a51b82d2e1ac27e366229c2d4bc33779555"} Feb 19 19:41:03 crc 
kubenswrapper[4787]: I0219 19:41:03.990664 4787 generic.go:334] "Generic (PLEG): container finished" podID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerID="fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac" exitCode=0 Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.990782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" event={"ID":"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81","Type":"ContainerDied","Data":"fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac"} Feb 19 19:41:03 crc kubenswrapper[4787]: I0219 19:41:03.995435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b1ab4f79-6425-4845-bdb8-c330cbc4fabc","Type":"ContainerStarted","Data":"f7306843983323a847265b535bfdb6309fa7038a14f82c3546160c0e4fb3ebfe"} Feb 19 19:41:04 crc kubenswrapper[4787]: I0219 19:41:04.004239 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" Feb 19 19:41:04 crc kubenswrapper[4787]: I0219 19:41:04.004583 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-wt8b8" event={"ID":"38564a28-9826-4ca2-82ee-a4c330461050","Type":"ContainerDied","Data":"05c08e2adc346fd2726518e655aca08c94463c051d2dec3971195833283daccd"} Feb 19 19:41:04 crc kubenswrapper[4787]: I0219 19:41:04.004655 4787 scope.go:117] "RemoveContainer" containerID="5a76b5f770b212f15a75e704326e9769e54d818b64dc3c76942e6482534b0a90" Feb 19 19:41:04 crc kubenswrapper[4787]: I0219 19:41:04.302572 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wt8b8"] Feb 19 19:41:04 crc kubenswrapper[4787]: I0219 19:41:04.322450 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-wt8b8"] Feb 19 19:41:05 crc kubenswrapper[4787]: I0219 19:41:05.016693 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="38564a28-9826-4ca2-82ee-a4c330461050" path="/var/lib/kubelet/pods/38564a28-9826-4ca2-82ee-a4c330461050/volumes" Feb 19 19:41:05 crc kubenswrapper[4787]: I0219 19:41:05.021260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b1ab4f79-6425-4845-bdb8-c330cbc4fabc","Type":"ContainerStarted","Data":"6a6cbef940859b7322c03fb01860ad3d88792dff8b38fd16273b052b11a6ce11"} Feb 19 19:41:05 crc kubenswrapper[4787]: I0219 19:41:05.032859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46","Type":"ContainerStarted","Data":"7d8d157c58b6b2d82fd3769785d466dc075385f684a439fca2f2866708ef4d1f"} Feb 19 19:41:05 crc kubenswrapper[4787]: I0219 19:41:05.038099 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" event={"ID":"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81","Type":"ContainerStarted","Data":"811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36"} Feb 19 19:41:05 crc kubenswrapper[4787]: I0219 19:41:05.038381 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:05 crc kubenswrapper[4787]: I0219 19:41:05.068046 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" podStartSLOduration=5.068027693 podStartE2EDuration="5.068027693s" podCreationTimestamp="2026-02-19 19:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:05.064282046 +0000 UTC m=+1332.854947988" watchObservedRunningTime="2026-02-19 19:41:05.068027693 +0000 UTC m=+1332.858693635" Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.042009 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-zn9pc" 
podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.065487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b1ab4f79-6425-4845-bdb8-c330cbc4fabc","Type":"ContainerStarted","Data":"ffa03ce64e9d4be33ca71b047b67543593e4b19603d33b436b811fa5000e6645"} Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.065593 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-log" containerID="cri-o://6a6cbef940859b7322c03fb01860ad3d88792dff8b38fd16273b052b11a6ce11" gracePeriod=30 Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.065745 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-httpd" containerID="cri-o://ffa03ce64e9d4be33ca71b047b67543593e4b19603d33b436b811fa5000e6645" gracePeriod=30 Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.070006 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-log" containerID="cri-o://7d8d157c58b6b2d82fd3769785d466dc075385f684a439fca2f2866708ef4d1f" gracePeriod=30 Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.070073 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46","Type":"ContainerStarted","Data":"f0b4ea1005ccbf3fd5c8189755307c86cac6f61e1b1ec24b1ffd126737d0e449"} Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.070359 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-httpd" containerID="cri-o://f0b4ea1005ccbf3fd5c8189755307c86cac6f61e1b1ec24b1ffd126737d0e449" gracePeriod=30 Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.105279 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.10526039 podStartE2EDuration="6.10526039s" podCreationTimestamp="2026-02-19 19:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:06.093038062 +0000 UTC m=+1333.883704004" watchObservedRunningTime="2026-02-19 19:41:06.10526039 +0000 UTC m=+1333.895926332" Feb 19 19:41:06 crc kubenswrapper[4787]: I0219 19:41:06.128013 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.127991337 podStartE2EDuration="7.127991337s" podCreationTimestamp="2026-02-19 19:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:06.118492267 +0000 UTC m=+1333.909158209" watchObservedRunningTime="2026-02-19 19:41:06.127991337 +0000 UTC m=+1333.918657279" Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.097765 4787 generic.go:334] "Generic (PLEG): container finished" podID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerID="f0b4ea1005ccbf3fd5c8189755307c86cac6f61e1b1ec24b1ffd126737d0e449" exitCode=0 Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.098235 4787 generic.go:334] "Generic (PLEG): container finished" podID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerID="7d8d157c58b6b2d82fd3769785d466dc075385f684a439fca2f2866708ef4d1f" exitCode=143 Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.097964 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46","Type":"ContainerDied","Data":"f0b4ea1005ccbf3fd5c8189755307c86cac6f61e1b1ec24b1ffd126737d0e449"} Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.098320 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46","Type":"ContainerDied","Data":"7d8d157c58b6b2d82fd3769785d466dc075385f684a439fca2f2866708ef4d1f"} Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.105800 4787 generic.go:334] "Generic (PLEG): container finished" podID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerID="ffa03ce64e9d4be33ca71b047b67543593e4b19603d33b436b811fa5000e6645" exitCode=0 Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.105822 4787 generic.go:334] "Generic (PLEG): container finished" podID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerID="6a6cbef940859b7322c03fb01860ad3d88792dff8b38fd16273b052b11a6ce11" exitCode=143 Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.105889 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b1ab4f79-6425-4845-bdb8-c330cbc4fabc","Type":"ContainerDied","Data":"ffa03ce64e9d4be33ca71b047b67543593e4b19603d33b436b811fa5000e6645"} Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.105909 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b1ab4f79-6425-4845-bdb8-c330cbc4fabc","Type":"ContainerDied","Data":"6a6cbef940859b7322c03fb01860ad3d88792dff8b38fd16273b052b11a6ce11"} Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.113754 4787 generic.go:334] "Generic (PLEG): container finished" podID="f3c39128-b6b9-4e85-8472-ea37c6779753" containerID="ff61d5e424a7aaa77f090d13c53e03d63779546af46704b7c03c1958840db54f" exitCode=0 Feb 19 19:41:07 crc kubenswrapper[4787]: I0219 19:41:07.113799 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-bootstrap-6gpd5" event={"ID":"f3c39128-b6b9-4e85-8472-ea37c6779753","Type":"ContainerDied","Data":"ff61d5e424a7aaa77f090d13c53e03d63779546af46704b7c03c1958840db54f"} Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.631028 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.757728 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrmn7\" (UniqueName: \"kubernetes.io/projected/f3c39128-b6b9-4e85-8472-ea37c6779753-kube-api-access-mrmn7\") pod \"f3c39128-b6b9-4e85-8472-ea37c6779753\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.758034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-fernet-keys\") pod \"f3c39128-b6b9-4e85-8472-ea37c6779753\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.758098 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-combined-ca-bundle\") pod \"f3c39128-b6b9-4e85-8472-ea37c6779753\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.758182 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-scripts\") pod \"f3c39128-b6b9-4e85-8472-ea37c6779753\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.758215 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-credential-keys\") pod \"f3c39128-b6b9-4e85-8472-ea37c6779753\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.758259 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-config-data\") pod \"f3c39128-b6b9-4e85-8472-ea37c6779753\" (UID: \"f3c39128-b6b9-4e85-8472-ea37c6779753\") " Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.764462 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-scripts" (OuterVolumeSpecName: "scripts") pod "f3c39128-b6b9-4e85-8472-ea37c6779753" (UID: "f3c39128-b6b9-4e85-8472-ea37c6779753"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.765742 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c39128-b6b9-4e85-8472-ea37c6779753-kube-api-access-mrmn7" (OuterVolumeSpecName: "kube-api-access-mrmn7") pod "f3c39128-b6b9-4e85-8472-ea37c6779753" (UID: "f3c39128-b6b9-4e85-8472-ea37c6779753"). InnerVolumeSpecName "kube-api-access-mrmn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.765778 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f3c39128-b6b9-4e85-8472-ea37c6779753" (UID: "f3c39128-b6b9-4e85-8472-ea37c6779753"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.766404 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f3c39128-b6b9-4e85-8472-ea37c6779753" (UID: "f3c39128-b6b9-4e85-8472-ea37c6779753"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.801633 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-config-data" (OuterVolumeSpecName: "config-data") pod "f3c39128-b6b9-4e85-8472-ea37c6779753" (UID: "f3c39128-b6b9-4e85-8472-ea37c6779753"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.805840 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3c39128-b6b9-4e85-8472-ea37c6779753" (UID: "f3c39128-b6b9-4e85-8472-ea37c6779753"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.860909 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrmn7\" (UniqueName: \"kubernetes.io/projected/f3c39128-b6b9-4e85-8472-ea37c6779753-kube-api-access-mrmn7\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.860948 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.860962 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.860972 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.860983 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:08 crc kubenswrapper[4787]: I0219 19:41:08.860995 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3c39128-b6b9-4e85-8472-ea37c6779753-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:09 crc kubenswrapper[4787]: E0219 19:41:09.065244 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3c39128_b6b9_4e85_8472_ea37c6779753.slice\": RecentStats: unable to find data in memory cache]" Feb 19 19:41:09 
crc kubenswrapper[4787]: I0219 19:41:09.138003 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6gpd5" event={"ID":"f3c39128-b6b9-4e85-8472-ea37c6779753","Type":"ContainerDied","Data":"eb93cca404ebf84234a2defda65b5dcbc5f4f50855fcc8a7a1ad4436ac94a17d"} Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.138364 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb93cca404ebf84234a2defda65b5dcbc5f4f50855fcc8a7a1ad4436ac94a17d" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.138051 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6gpd5" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.226155 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6gpd5"] Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.239883 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6gpd5"] Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.314357 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-x77cx"] Feb 19 19:41:09 crc kubenswrapper[4787]: E0219 19:41:09.314809 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="init" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.314830 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="init" Feb 19 19:41:09 crc kubenswrapper[4787]: E0219 19:41:09.314868 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38564a28-9826-4ca2-82ee-a4c330461050" containerName="init" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.314877 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="38564a28-9826-4ca2-82ee-a4c330461050" containerName="init" Feb 19 19:41:09 crc kubenswrapper[4787]: E0219 19:41:09.314890 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c39128-b6b9-4e85-8472-ea37c6779753" containerName="keystone-bootstrap" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.314905 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c39128-b6b9-4e85-8472-ea37c6779753" containerName="keystone-bootstrap" Feb 19 19:41:09 crc kubenswrapper[4787]: E0219 19:41:09.314919 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="dnsmasq-dns" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.314925 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="dnsmasq-dns" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.315120 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="38564a28-9826-4ca2-82ee-a4c330461050" containerName="init" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.315140 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3995d1ff-649e-42b8-87c9-325a423587a9" containerName="dnsmasq-dns" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.315161 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c39128-b6b9-4e85-8472-ea37c6779753" containerName="keystone-bootstrap" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.315877 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.318706 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.318950 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b8sqz" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.319092 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.319237 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.320249 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.341006 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x77cx"] Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.380916 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-credential-keys\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.381420 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-fernet-keys\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.381465 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-scripts\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.381525 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-combined-ca-bundle\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.381684 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzg26\" (UniqueName: \"kubernetes.io/projected/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-kube-api-access-xzg26\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.382429 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-config-data\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.484212 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-scripts\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.484289 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-combined-ca-bundle\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.484379 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzg26\" (UniqueName: \"kubernetes.io/projected/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-kube-api-access-xzg26\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.484413 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-config-data\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.484491 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-credential-keys\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.485264 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-fernet-keys\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.490082 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-combined-ca-bundle\") pod 
\"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.490871 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-credential-keys\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.490905 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-scripts\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.491370 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-config-data\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.491781 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-fernet-keys\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc kubenswrapper[4787]: I0219 19:41:09.501343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzg26\" (UniqueName: \"kubernetes.io/projected/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-kube-api-access-xzg26\") pod \"keystone-bootstrap-x77cx\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:09 crc 
kubenswrapper[4787]: I0219 19:41:09.641179 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:10 crc kubenswrapper[4787]: I0219 19:41:10.905120 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c39128-b6b9-4e85-8472-ea37c6779753" path="/var/lib/kubelet/pods/f3c39128-b6b9-4e85-8472-ea37c6779753/volumes" Feb 19 19:41:11 crc kubenswrapper[4787]: I0219 19:41:11.144778 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:11 crc kubenswrapper[4787]: I0219 19:41:11.218337 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mnh6r"] Feb 19 19:41:11 crc kubenswrapper[4787]: I0219 19:41:11.218583 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-mnh6r" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" containerID="cri-o://de9aa46bcb0c6e2b809cc9f5fb6f13c6bb1dcf88d5fc3facfc9b39ebfbed40f0" gracePeriod=10 Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.174092 4787 generic.go:334] "Generic (PLEG): container finished" podID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerID="de9aa46bcb0c6e2b809cc9f5fb6f13c6bb1dcf88d5fc3facfc9b39ebfbed40f0" exitCode=0 Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.174144 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mnh6r" event={"ID":"0fa638ca-98b0-492d-aca0-05e57d565eb0","Type":"ContainerDied","Data":"de9aa46bcb0c6e2b809cc9f5fb6f13c6bb1dcf88d5fc3facfc9b39ebfbed40f0"} Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.681312 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763077 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-public-tls-certs\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763146 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-combined-ca-bundle\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763195 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-scripts\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763294 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-httpd-run\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763332 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-config-data\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763383 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-logs\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763411 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbnjz\" (UniqueName: \"kubernetes.io/projected/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-kube-api-access-gbnjz\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.763539 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\" (UID: \"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46\") " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.764126 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.764208 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-logs" (OuterVolumeSpecName: "logs") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.769356 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-scripts" (OuterVolumeSpecName: "scripts") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.769554 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-kube-api-access-gbnjz" (OuterVolumeSpecName: "kube-api-access-gbnjz") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "kube-api-access-gbnjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.779219 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b" (OuterVolumeSpecName: "glance") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.797750 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.819936 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-config-data" (OuterVolumeSpecName: "config-data") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.829131 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" (UID: "e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866444 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866470 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866481 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866492 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbnjz\" (UniqueName: \"kubernetes.io/projected/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-kube-api-access-gbnjz\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc 
kubenswrapper[4787]: I0219 19:41:12.866532 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") on node \"crc\" " Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866541 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866551 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.866560 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.899731 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.899897 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b") on node "crc" Feb 19 19:41:12 crc kubenswrapper[4787]: I0219 19:41:12.978129 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.196989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46","Type":"ContainerDied","Data":"6718001e51ece37d6a6a1db487a89a51b82d2e1ac27e366229c2d4bc33779555"} Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.197509 4787 scope.go:117] "RemoveContainer" containerID="f0b4ea1005ccbf3fd5c8189755307c86cac6f61e1b1ec24b1ffd126737d0e449" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.197217 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.251893 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.265755 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.277266 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:13 crc kubenswrapper[4787]: E0219 19:41:13.277758 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-log" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.277774 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-log" Feb 19 19:41:13 crc kubenswrapper[4787]: E0219 19:41:13.277787 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-httpd" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.277794 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-httpd" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.277979 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-httpd" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.277996 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" containerName="glance-log" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.279207 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.282126 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.282155 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.299699 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385306 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385374 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385415 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-logs\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385561 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4q7\" (UniqueName: \"kubernetes.io/projected/2440e80f-a370-4420-866d-7059aa04e7b3-kube-api-access-nb4q7\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385708 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.385808 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.487990 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.488875 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.488909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.488931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-logs\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.489409 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4q7\" (UniqueName: \"kubernetes.io/projected/2440e80f-a370-4420-866d-7059aa04e7b3-kube-api-access-nb4q7\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.489482 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-logs\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.489507 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.489538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.489702 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.490122 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.492871 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.492908 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ed3d7c97b215637ba9b7943cd9135b129ba954894c5daafc1dc18f3b0039582/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.492950 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-config-data\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.493178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-scripts\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.494683 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.498252 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.507309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4q7\" (UniqueName: \"kubernetes.io/projected/2440e80f-a370-4420-866d-7059aa04e7b3-kube-api-access-nb4q7\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.538063 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " pod="openstack/glance-default-external-api-0" Feb 19 19:41:13 crc kubenswrapper[4787]: I0219 19:41:13.603573 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:41:14 crc kubenswrapper[4787]: I0219 19:41:14.902904 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46" path="/var/lib/kubelet/pods/e1bcf1d5-3a24-4a0d-8f10-a9fd03bb3e46/volumes" Feb 19 19:41:15 crc kubenswrapper[4787]: I0219 19:41:15.238140 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mnh6r" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Feb 19 19:41:20 crc kubenswrapper[4787]: I0219 19:41:20.237954 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mnh6r" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Feb 19 19:41:25 crc kubenswrapper[4787]: I0219 19:41:25.237347 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mnh6r" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Feb 19 19:41:25 crc kubenswrapper[4787]: I0219 19:41:25.237996 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:41:27 crc kubenswrapper[4787]: E0219 19:41:27.976884 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 19 19:41:27 crc kubenswrapper[4787]: E0219 19:41:27.977412 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n674hf5h7dh67ch554h65fh9ch564h5f4h554h66bh64fh6dh564h65h96hb9h648h84h5cdh64dh4hdfh677hf7h79h64bhb9h665hc9h67bh66cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxfj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(364fd284-971f-4143-94fa-542904ee31fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.111171 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223155 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psjxx\" (UniqueName: \"kubernetes.io/projected/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-kube-api-access-psjxx\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223242 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-httpd-run\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223296 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-config-data\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223396 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-scripts\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223470 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-internal-tls-certs\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223661 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223799 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-combined-ca-bundle\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.223819 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-logs\") pod \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\" (UID: \"b1ab4f79-6425-4845-bdb8-c330cbc4fabc\") " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.224252 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-logs" (OuterVolumeSpecName: "logs") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.224412 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.224976 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.224995 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.236227 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-scripts" (OuterVolumeSpecName: "scripts") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.244372 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-kube-api-access-psjxx" (OuterVolumeSpecName: "kube-api-access-psjxx") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "kube-api-access-psjxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.247316 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2" (OuterVolumeSpecName: "glance") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.286434 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.293012 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.312435 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-config-data" (OuterVolumeSpecName: "config-data") pod "b1ab4f79-6425-4845-bdb8-c330cbc4fabc" (UID: "b1ab4f79-6425-4845-bdb8-c330cbc4fabc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.328154 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.328366 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.328578 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") on node \"crc\" " Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.328699 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.328783 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psjxx\" (UniqueName: \"kubernetes.io/projected/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-kube-api-access-psjxx\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.328860 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ab4f79-6425-4845-bdb8-c330cbc4fabc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.351850 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b1ab4f79-6425-4845-bdb8-c330cbc4fabc","Type":"ContainerDied","Data":"f7306843983323a847265b535bfdb6309fa7038a14f82c3546160c0e4fb3ebfe"} Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.351947 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.359134 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.359288 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2") on node "crc" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.437244 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.450920 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.461900 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.492721 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:28 crc kubenswrapper[4787]: E0219 19:41:28.493522 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-log" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.493545 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" 
containerName="glance-log" Feb 19 19:41:28 crc kubenswrapper[4787]: E0219 19:41:28.493589 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-httpd" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.493596 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-httpd" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.493847 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-log" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.493889 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" containerName="glance-httpd" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.495397 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.499713 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.499974 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.507984 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.539539 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 
19:41:28.539593 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-logs\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.540036 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.540150 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.540358 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.540471 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 
19:41:28.540615 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.540677 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgfb\" (UniqueName: \"kubernetes.io/projected/2039e8f8-5048-488a-bc3a-82a3d4389943-kube-api-access-9zgfb\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644306 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644454 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644497 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgfb\" (UniqueName: \"kubernetes.io/projected/2039e8f8-5048-488a-bc3a-82a3d4389943-kube-api-access-9zgfb\") pod \"glance-default-internal-api-0\" (UID: 
\"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644577 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644600 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-logs\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644718 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644755 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.644824 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 
19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.645719 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-logs\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.645755 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.647170 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.647203 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/039867c40e5afa02457a0a3be85bbd756f1b9686f252b64fbed40b83ae37d83d/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.649510 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.650859 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.651382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.652304 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.662160 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgfb\" (UniqueName: \"kubernetes.io/projected/2039e8f8-5048-488a-bc3a-82a3d4389943-kube-api-access-9zgfb\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.694580 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: E0219 19:41:28.786211 4787 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 19 19:41:28 crc kubenswrapper[4787]: E0219 19:41:28.786376 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtqwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
barbican-db-sync-7vnc9_openstack(c3100990-268a-4c84-8e81-ba54457b771b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:41:28 crc kubenswrapper[4787]: E0219 19:41:28.787572 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-7vnc9" podUID="c3100990-268a-4c84-8e81-ba54457b771b" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.821704 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:28 crc kubenswrapper[4787]: I0219 19:41:28.906953 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ab4f79-6425-4845-bdb8-c330cbc4fabc" path="/var/lib/kubelet/pods/b1ab4f79-6425-4845-bdb8-c330cbc4fabc/volumes" Feb 19 19:41:29 crc kubenswrapper[4787]: E0219 19:41:29.065219 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 19 19:41:29 crc kubenswrapper[4787]: E0219 19:41:29.065440 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrkwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-9m8s6_openstack(3d2b70fa-1540-4748-8660-6d1fb44036fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
19 19:41:29 crc kubenswrapper[4787]: E0219 19:41:29.066650 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-9m8s6" podUID="3d2b70fa-1540-4748-8660-6d1fb44036fe" Feb 19 19:41:29 crc kubenswrapper[4787]: I0219 19:41:29.079211 4787 scope.go:117] "RemoveContainer" containerID="7d8d157c58b6b2d82fd3769785d466dc075385f684a439fca2f2866708ef4d1f" Feb 19 19:41:29 crc kubenswrapper[4787]: E0219 19:41:29.365956 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-7vnc9" podUID="c3100990-268a-4c84-8e81-ba54457b771b" Feb 19 19:41:29 crc kubenswrapper[4787]: E0219 19:41:29.366317 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-9m8s6" podUID="3d2b70fa-1540-4748-8660-6d1fb44036fe" Feb 19 19:41:30 crc kubenswrapper[4787]: E0219 19:41:30.104969 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 19 19:41:30 crc kubenswrapper[4787]: E0219 19:41:30.105719 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4k49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vlnf7_openstack(0a5aa867-a7f1-4a64-a8cd-d515fb1e210d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 19:41:30 crc kubenswrapper[4787]: E0219 19:41:30.106836 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vlnf7" podUID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.114418 4787 scope.go:117] "RemoveContainer" containerID="ffa03ce64e9d4be33ca71b047b67543593e4b19603d33b436b811fa5000e6645" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.228922 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.279022 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6fh9\" (UniqueName: \"kubernetes.io/projected/0fa638ca-98b0-492d-aca0-05e57d565eb0-kube-api-access-d6fh9\") pod \"0fa638ca-98b0-492d-aca0-05e57d565eb0\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.279355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-sb\") pod \"0fa638ca-98b0-492d-aca0-05e57d565eb0\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.279530 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-dns-svc\") pod \"0fa638ca-98b0-492d-aca0-05e57d565eb0\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.279594 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-config\") pod \"0fa638ca-98b0-492d-aca0-05e57d565eb0\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.279862 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-nb\") pod \"0fa638ca-98b0-492d-aca0-05e57d565eb0\" (UID: \"0fa638ca-98b0-492d-aca0-05e57d565eb0\") " Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.288159 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0fa638ca-98b0-492d-aca0-05e57d565eb0-kube-api-access-d6fh9" (OuterVolumeSpecName: "kube-api-access-d6fh9") pod "0fa638ca-98b0-492d-aca0-05e57d565eb0" (UID: "0fa638ca-98b0-492d-aca0-05e57d565eb0"). InnerVolumeSpecName "kube-api-access-d6fh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.346580 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fa638ca-98b0-492d-aca0-05e57d565eb0" (UID: "0fa638ca-98b0-492d-aca0-05e57d565eb0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.352340 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fa638ca-98b0-492d-aca0-05e57d565eb0" (UID: "0fa638ca-98b0-492d-aca0-05e57d565eb0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.364528 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-config" (OuterVolumeSpecName: "config") pod "0fa638ca-98b0-492d-aca0-05e57d565eb0" (UID: "0fa638ca-98b0-492d-aca0-05e57d565eb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.380365 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mnh6r" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.380391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mnh6r" event={"ID":"0fa638ca-98b0-492d-aca0-05e57d565eb0","Type":"ContainerDied","Data":"2e1ee4b3b14de3202075d02625aaa9cecdebcb321fa7c75ce759f7836bc1705b"} Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.386071 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6fh9\" (UniqueName: \"kubernetes.io/projected/0fa638ca-98b0-492d-aca0-05e57d565eb0-kube-api-access-d6fh9\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.386114 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.386159 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.386174 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:30 crc kubenswrapper[4787]: E0219 19:41:30.388350 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vlnf7" podUID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.397594 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0fa638ca-98b0-492d-aca0-05e57d565eb0" (UID: "0fa638ca-98b0-492d-aca0-05e57d565eb0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.489985 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0fa638ca-98b0-492d-aca0-05e57d565eb0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.574465 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-x77cx"] Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.664431 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.665977 4787 scope.go:117] "RemoveContainer" containerID="6a6cbef940859b7322c03fb01860ad3d88792dff8b38fd16273b052b11a6ce11" Feb 19 19:41:30 crc kubenswrapper[4787]: W0219 19:41:30.736492 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2440e80f_a370_4420_866d_7059aa04e7b3.slice/crio-9b7fd0c7f26afb9a2db21f831fe69c7d6246fce4b9de03987ec43383360b1269 WatchSource:0}: Error finding container 9b7fd0c7f26afb9a2db21f831fe69c7d6246fce4b9de03987ec43383360b1269: Status 404 returned error can't find the container with id 9b7fd0c7f26afb9a2db21f831fe69c7d6246fce4b9de03987ec43383360b1269 Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.737354 4787 scope.go:117] "RemoveContainer" containerID="de9aa46bcb0c6e2b809cc9f5fb6f13c6bb1dcf88d5fc3facfc9b39ebfbed40f0" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.752744 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mnh6r"] Feb 19 19:41:30 crc kubenswrapper[4787]: 
I0219 19:41:30.766757 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mnh6r"] Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.777772 4787 scope.go:117] "RemoveContainer" containerID="d003e35a67cc1d25a900af4e31ed688bca71144d5f0f6d4e1fecc509f1f991d1" Feb 19 19:41:30 crc kubenswrapper[4787]: I0219 19:41:30.904405 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" path="/var/lib/kubelet/pods/0fa638ca-98b0-492d-aca0-05e57d565eb0/volumes" Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.134307 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.395425 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerStarted","Data":"aa99fdb689a7fdeeee3b5b7a5f1a775ca3ab107fd54347c9c50ed4639ec382ae"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.397491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x77cx" event={"ID":"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d","Type":"ContainerStarted","Data":"3f298000d71cb4c816726423190729260496dab182211a05859d7ba65f6cd853"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.397520 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x77cx" event={"ID":"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d","Type":"ContainerStarted","Data":"bed2702be14880a86b9ffe8a01458e7aec4ab33a7fbe01078dcd31dcb352d1b7"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.400188 4787 generic.go:334] "Generic (PLEG): container finished" podID="e917a58d-1e20-4326-a9a1-44c35c41636c" containerID="dfe8dca608ea0e3d0aa8a872d7bb42c345d6d1babe546ff08c4757eb7c5a638e" exitCode=0 Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.400238 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-sync-qdlxg" event={"ID":"e917a58d-1e20-4326-a9a1-44c35c41636c","Type":"ContainerDied","Data":"dfe8dca608ea0e3d0aa8a872d7bb42c345d6d1babe546ff08c4757eb7c5a638e"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.402011 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2440e80f-a370-4420-866d-7059aa04e7b3","Type":"ContainerStarted","Data":"9b7fd0c7f26afb9a2db21f831fe69c7d6246fce4b9de03987ec43383360b1269"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.403983 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2039e8f8-5048-488a-bc3a-82a3d4389943","Type":"ContainerStarted","Data":"2b742fc62001b18d99622ecee22a331a553c71be6428700df09fc3355825a4df"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.408827 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktn47" event={"ID":"dbf1edd0-4024-4d19-be88-bd8f001052d8","Type":"ContainerStarted","Data":"8c93defefa4d9b3c69242ed2499139e38c0f1a1e427eb0d613b65075084e8f0c"} Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.421796 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-x77cx" podStartSLOduration=22.421779749 podStartE2EDuration="22.421779749s" podCreationTimestamp="2026-02-19 19:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:31.416430817 +0000 UTC m=+1359.207096759" watchObservedRunningTime="2026-02-19 19:41:31.421779749 +0000 UTC m=+1359.212445691" Feb 19 19:41:31 crc kubenswrapper[4787]: I0219 19:41:31.436508 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ktn47" podStartSLOduration=4.389709464 podStartE2EDuration="31.436487138s" podCreationTimestamp="2026-02-19 19:41:00 
+0000 UTC" firstStartedPulling="2026-02-19 19:41:01.992528974 +0000 UTC m=+1329.783194916" lastFinishedPulling="2026-02-19 19:41:29.039306648 +0000 UTC m=+1356.829972590" observedRunningTime="2026-02-19 19:41:31.43304514 +0000 UTC m=+1359.223711082" watchObservedRunningTime="2026-02-19 19:41:31.436487138 +0000 UTC m=+1359.227153080" Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.425444 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2440e80f-a370-4420-866d-7059aa04e7b3","Type":"ContainerStarted","Data":"552a698ec62fc6a022b1253b3ca460481ea8903094fbbc878ec9dcdbaecd8be2"} Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.425826 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2440e80f-a370-4420-866d-7059aa04e7b3","Type":"ContainerStarted","Data":"4b46856917ee73ddd229286822ffc328d13b1c26602c1a4f7d7d7151a00ea032"} Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.430810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2039e8f8-5048-488a-bc3a-82a3d4389943","Type":"ContainerStarted","Data":"0276fba5db63f2a08f74ce737f460ef19cde912a4ab2d72f6d83e2d3fe1937f7"} Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.430893 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2039e8f8-5048-488a-bc3a-82a3d4389943","Type":"ContainerStarted","Data":"1805ac619773501a73de490092b5050dc1a1af5fc129fdb4ecc00e28689a3f0f"} Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.463921 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.463903015 podStartE2EDuration="19.463903015s" podCreationTimestamp="2026-02-19 19:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-19 19:41:32.447718195 +0000 UTC m=+1360.238384157" watchObservedRunningTime="2026-02-19 19:41:32.463903015 +0000 UTC m=+1360.254568957" Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.488277 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.488253889 podStartE2EDuration="4.488253889s" podCreationTimestamp="2026-02-19 19:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:32.477911834 +0000 UTC m=+1360.268577786" watchObservedRunningTime="2026-02-19 19:41:32.488253889 +0000 UTC m=+1360.278919831" Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.902947 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.956852 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-combined-ca-bundle\") pod \"e917a58d-1e20-4326-a9a1-44c35c41636c\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.956961 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-config\") pod \"e917a58d-1e20-4326-a9a1-44c35c41636c\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.957015 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njsnk\" (UniqueName: \"kubernetes.io/projected/e917a58d-1e20-4326-a9a1-44c35c41636c-kube-api-access-njsnk\") pod \"e917a58d-1e20-4326-a9a1-44c35c41636c\" (UID: \"e917a58d-1e20-4326-a9a1-44c35c41636c\") " Feb 19 
19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.963224 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e917a58d-1e20-4326-a9a1-44c35c41636c-kube-api-access-njsnk" (OuterVolumeSpecName: "kube-api-access-njsnk") pod "e917a58d-1e20-4326-a9a1-44c35c41636c" (UID: "e917a58d-1e20-4326-a9a1-44c35c41636c"). InnerVolumeSpecName "kube-api-access-njsnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.989974 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-config" (OuterVolumeSpecName: "config") pod "e917a58d-1e20-4326-a9a1-44c35c41636c" (UID: "e917a58d-1e20-4326-a9a1-44c35c41636c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:32 crc kubenswrapper[4787]: I0219 19:41:32.999110 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e917a58d-1e20-4326-a9a1-44c35c41636c" (UID: "e917a58d-1e20-4326-a9a1-44c35c41636c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.062964 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.063011 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e917a58d-1e20-4326-a9a1-44c35c41636c-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.063025 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njsnk\" (UniqueName: \"kubernetes.io/projected/e917a58d-1e20-4326-a9a1-44c35c41636c-kube-api-access-njsnk\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.441565 4787 generic.go:334] "Generic (PLEG): container finished" podID="dbf1edd0-4024-4d19-be88-bd8f001052d8" containerID="8c93defefa4d9b3c69242ed2499139e38c0f1a1e427eb0d613b65075084e8f0c" exitCode=0 Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.441646 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktn47" event={"ID":"dbf1edd0-4024-4d19-be88-bd8f001052d8","Type":"ContainerDied","Data":"8c93defefa4d9b3c69242ed2499139e38c0f1a1e427eb0d613b65075084e8f0c"} Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.446123 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qdlxg" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.447762 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qdlxg" event={"ID":"e917a58d-1e20-4326-a9a1-44c35c41636c","Type":"ContainerDied","Data":"7f05952938b4bca500033dc07b8edce4406a64a83f9e1c75a3e10b2b588cf2f9"} Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.447793 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f05952938b4bca500033dc07b8edce4406a64a83f9e1c75a3e10b2b588cf2f9" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.604362 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.604729 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.616371 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79cgv"] Feb 19 19:41:33 crc kubenswrapper[4787]: E0219 19:41:33.619255 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e917a58d-1e20-4326-a9a1-44c35c41636c" containerName="neutron-db-sync" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.619415 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e917a58d-1e20-4326-a9a1-44c35c41636c" containerName="neutron-db-sync" Feb 19 19:41:33 crc kubenswrapper[4787]: E0219 19:41:33.619655 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="init" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.620167 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="init" Feb 19 19:41:33 crc kubenswrapper[4787]: E0219 19:41:33.620271 4787 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.620343 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.620713 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa638ca-98b0-492d-aca0-05e57d565eb0" containerName="dnsmasq-dns" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.620824 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e917a58d-1e20-4326-a9a1-44c35c41636c" containerName="neutron-db-sync" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.622851 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.631952 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79cgv"] Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.706089 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.727940 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.750229 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8549458c84-mvvtk"] Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.752510 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.777885 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrzvh" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.780914 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.781560 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.783124 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.783893 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-config\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.783988 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.784111 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 
19:41:33.784296 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.784425 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.784586 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tnkg\" (UniqueName: \"kubernetes.io/projected/0eae0600-9f79-4c97-9af4-c5f2ac5de934-kube-api-access-9tnkg\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.809148 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8549458c84-mvvtk"] Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.899154 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-config\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.899518 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-httpd-config\") pod 
\"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.899687 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-combined-ca-bundle\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900085 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900205 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900533 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: 
\"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900642 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-ovndb-tls-certs\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tnkg\" (UniqueName: \"kubernetes.io/projected/0eae0600-9f79-4c97-9af4-c5f2ac5de934-kube-api-access-9tnkg\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900885 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pckxh\" (UniqueName: \"kubernetes.io/projected/748246f5-7418-4111-b5bb-599732020b19-kube-api-access-pckxh\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.901086 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.901086 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-config\") pod \"neutron-8549458c84-mvvtk\" (UID: 
\"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900095 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-config\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.900597 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.902380 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.903019 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 crc kubenswrapper[4787]: I0219 19:41:33.925654 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tnkg\" (UniqueName: \"kubernetes.io/projected/0eae0600-9f79-4c97-9af4-c5f2ac5de934-kube-api-access-9tnkg\") pod \"dnsmasq-dns-55f844cf75-79cgv\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:33 
crc kubenswrapper[4787]: I0219 19:41:33.960274 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.006036 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-ovndb-tls-certs\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.006133 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pckxh\" (UniqueName: \"kubernetes.io/projected/748246f5-7418-4111-b5bb-599732020b19-kube-api-access-pckxh\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.006222 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-config\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.006294 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-httpd-config\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.006328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-combined-ca-bundle\") pod \"neutron-8549458c84-mvvtk\" (UID: 
\"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.021699 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-ovndb-tls-certs\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.021829 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-config\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.023745 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-combined-ca-bundle\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.030048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-httpd-config\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.041026 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pckxh\" (UniqueName: \"kubernetes.io/projected/748246f5-7418-4111-b5bb-599732020b19-kube-api-access-pckxh\") pod \"neutron-8549458c84-mvvtk\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") " pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 
19:41:34.080418 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.460691 4787 generic.go:334] "Generic (PLEG): container finished" podID="20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" containerID="3f298000d71cb4c816726423190729260496dab182211a05859d7ba65f6cd853" exitCode=0 Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.460756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x77cx" event={"ID":"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d","Type":"ContainerDied","Data":"3f298000d71cb4c816726423190729260496dab182211a05859d7ba65f6cd853"} Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.462462 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 19:41:34 crc kubenswrapper[4787]: I0219 19:41:34.462637 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.023858 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d8c48d785-rdt7v"] Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.026186 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.029944 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.031862 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.042973 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d8c48d785-rdt7v"] Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.164694 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-ovndb-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.164839 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-httpd-config\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.164945 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-combined-ca-bundle\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.165083 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-config\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.165294 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-public-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.165507 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-internal-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.166070 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vv82\" (UniqueName: \"kubernetes.io/projected/62a31ecf-6e1f-474f-99ac-aa021dca2905-kube-api-access-8vv82\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.268618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-ovndb-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.268690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-httpd-config\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.268735 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-combined-ca-bundle\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.268795 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-config\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.268843 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-public-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.268917 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-internal-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.269004 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vv82\" (UniqueName: \"kubernetes.io/projected/62a31ecf-6e1f-474f-99ac-aa021dca2905-kube-api-access-8vv82\") pod 
\"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.284879 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-internal-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.285004 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-public-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.285552 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-ovndb-tls-certs\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.285702 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-combined-ca-bundle\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.285846 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-httpd-config\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc 
kubenswrapper[4787]: I0219 19:41:36.286259 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-config\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.302766 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vv82\" (UniqueName: \"kubernetes.io/projected/62a31ecf-6e1f-474f-99ac-aa021dca2905-kube-api-access-8vv82\") pod \"neutron-d8c48d785-rdt7v\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:36 crc kubenswrapper[4787]: I0219 19:41:36.370209 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.254786 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6k49"] Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.257448 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.271443 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6k49"] Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.327012 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-catalog-content\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.327396 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-utilities\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.327504 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzh5g\" (UniqueName: \"kubernetes.io/projected/b341cddb-4e14-4928-af2b-18b902d1999c-kube-api-access-tzh5g\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.430128 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-catalog-content\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.430271 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-utilities\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.430303 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzh5g\" (UniqueName: \"kubernetes.io/projected/b341cddb-4e14-4928-af2b-18b902d1999c-kube-api-access-tzh5g\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.430740 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-utilities\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.430746 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-catalog-content\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.453201 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzh5g\" (UniqueName: \"kubernetes.io/projected/b341cddb-4e14-4928-af2b-18b902d1999c-kube-api-access-tzh5g\") pod \"redhat-operators-v6k49\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.590232 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.822447 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.822514 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.857405 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:38 crc kubenswrapper[4787]: I0219 19:41:38.867049 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:39 crc kubenswrapper[4787]: I0219 19:41:39.532645 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:39 crc kubenswrapper[4787]: I0219 19:41:39.532979 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.137032 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.137516 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.137881 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.139157 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.418563 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.485920 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.553398 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf6dp\" (UniqueName: \"kubernetes.io/projected/dbf1edd0-4024-4d19-be88-bd8f001052d8-kube-api-access-bf6dp\") pod \"dbf1edd0-4024-4d19-be88-bd8f001052d8\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.553524 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-combined-ca-bundle\") pod \"dbf1edd0-4024-4d19-be88-bd8f001052d8\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.553631 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-scripts\") pod \"dbf1edd0-4024-4d19-be88-bd8f001052d8\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.553695 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-config-data\") pod \"dbf1edd0-4024-4d19-be88-bd8f001052d8\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.553786 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf1edd0-4024-4d19-be88-bd8f001052d8-logs\") pod \"dbf1edd0-4024-4d19-be88-bd8f001052d8\" (UID: \"dbf1edd0-4024-4d19-be88-bd8f001052d8\") " Feb 
19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.554763 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf1edd0-4024-4d19-be88-bd8f001052d8-logs" (OuterVolumeSpecName: "logs") pod "dbf1edd0-4024-4d19-be88-bd8f001052d8" (UID: "dbf1edd0-4024-4d19-be88-bd8f001052d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.577798 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-scripts" (OuterVolumeSpecName: "scripts") pod "dbf1edd0-4024-4d19-be88-bd8f001052d8" (UID: "dbf1edd0-4024-4d19-be88-bd8f001052d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.577881 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf1edd0-4024-4d19-be88-bd8f001052d8-kube-api-access-bf6dp" (OuterVolumeSpecName: "kube-api-access-bf6dp") pod "dbf1edd0-4024-4d19-be88-bd8f001052d8" (UID: "dbf1edd0-4024-4d19-be88-bd8f001052d8"). InnerVolumeSpecName "kube-api-access-bf6dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.599749 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-config-data" (OuterVolumeSpecName: "config-data") pod "dbf1edd0-4024-4d19-be88-bd8f001052d8" (UID: "dbf1edd0-4024-4d19-be88-bd8f001052d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.600667 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbf1edd0-4024-4d19-be88-bd8f001052d8" (UID: "dbf1edd0-4024-4d19-be88-bd8f001052d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.625653 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-x77cx" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.625668 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-x77cx" event={"ID":"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d","Type":"ContainerDied","Data":"bed2702be14880a86b9ffe8a01458e7aec4ab33a7fbe01078dcd31dcb352d1b7"} Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.626545 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed2702be14880a86b9ffe8a01458e7aec4ab33a7fbe01078dcd31dcb352d1b7" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.632208 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-ktn47" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.632291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ktn47" event={"ID":"dbf1edd0-4024-4d19-be88-bd8f001052d8","Type":"ContainerDied","Data":"2acd3079d07936c8a1f36d82650a5dcb646ec7dd890630104520e66b5a822c6d"} Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.632344 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2acd3079d07936c8a1f36d82650a5dcb646ec7dd890630104520e66b5a822c6d" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.679265 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-config-data\") pod \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.679462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-combined-ca-bundle\") pod \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.679632 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzg26\" (UniqueName: \"kubernetes.io/projected/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-kube-api-access-xzg26\") pod \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.679805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-fernet-keys\") pod \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\" (UID: 
\"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.679893 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-scripts\") pod \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.679927 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-credential-keys\") pod \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\" (UID: \"20c2ea68-2415-4d8d-88eb-3cf18c4eda8d\") " Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.682052 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf6dp\" (UniqueName: \"kubernetes.io/projected/dbf1edd0-4024-4d19-be88-bd8f001052d8-kube-api-access-bf6dp\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.683104 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.683131 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.683142 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf1edd0-4024-4d19-be88-bd8f001052d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.683157 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dbf1edd0-4024-4d19-be88-bd8f001052d8-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.686004 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" (UID: "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.703902 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" (UID: "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.707226 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-kube-api-access-xzg26" (OuterVolumeSpecName: "kube-api-access-xzg26") pod "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" (UID: "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d"). InnerVolumeSpecName "kube-api-access-xzg26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.732772 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-scripts" (OuterVolumeSpecName: "scripts") pod "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" (UID: "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.745106 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" (UID: "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.757730 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-config-data" (OuterVolumeSpecName: "config-data") pod "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" (UID: "20c2ea68-2415-4d8d-88eb-3cf18c4eda8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.785909 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.785938 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzg26\" (UniqueName: \"kubernetes.io/projected/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-kube-api-access-xzg26\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.785948 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.785957 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.785967 4787 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.785974 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:43 crc kubenswrapper[4787]: I0219 19:41:43.960958 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8549458c84-mvvtk"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.182878 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79cgv"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.223709 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6k49"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.306753 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d8c48d785-rdt7v"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.646807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7vnc9" event={"ID":"c3100990-268a-4c84-8e81-ba54457b771b","Type":"ContainerStarted","Data":"33c5329e15100f521363d042f0452651923f43c508a25503baff1a8ed5c2315e"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.652945 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerStarted","Data":"040d66b1bc3b8803ac6e7143d2a753aae2b261a653f846aaa70df189cb99ccad"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.657868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" 
event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerStarted","Data":"e26752663bc11dce1b1d2a46d6e763f7163eaacb9354aa82a4199d264509077d"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.662744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9m8s6" event={"ID":"3d2b70fa-1540-4748-8660-6d1fb44036fe","Type":"ContainerStarted","Data":"ec051585bc6a7852fa175077fb4d757ce922e6fdd160df98d35fcec5b0f477a7"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.670953 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f8c6bbf7c-8r6fm"] Feb 19 19:41:44 crc kubenswrapper[4787]: E0219 19:41:44.671489 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" containerName="keystone-bootstrap" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.671510 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" containerName="keystone-bootstrap" Feb 19 19:41:44 crc kubenswrapper[4787]: E0219 19:41:44.671537 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf1edd0-4024-4d19-be88-bd8f001052d8" containerName="placement-db-sync" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.671544 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf1edd0-4024-4d19-be88-bd8f001052d8" containerName="placement-db-sync" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.671830 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf1edd0-4024-4d19-be88-bd8f001052d8" containerName="placement-db-sync" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.671848 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" containerName="keystone-bootstrap" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.672731 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.680095 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.684845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8549458c84-mvvtk" event={"ID":"748246f5-7418-4111-b5bb-599732020b19","Type":"ContainerStarted","Data":"127ce436378f533333eed531ffbbd97b13d94c9257292dfe3e69c526ce5b1ab0"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.684885 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8549458c84-mvvtk" event={"ID":"748246f5-7418-4111-b5bb-599732020b19","Type":"ContainerStarted","Data":"88f9f5119d324f9c0ec2542c04470c7f8e1b4ba9244fedadcbbac07904c5c5fb"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.690334 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.690551 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.690626 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.691229 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.695361 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b8sqz" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.697570 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8c48d785-rdt7v" event={"ID":"62a31ecf-6e1f-474f-99ac-aa021dca2905","Type":"ContainerStarted","Data":"767cdcba0b07305e5ef1ff6cf40a420149e94f72c2abc2f2cb5377533cffb36b"} Feb 
19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.716561 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f8c6bbf7c-8r6fm"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.717513 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7vnc9" podStartSLOduration=3.007317613 podStartE2EDuration="44.717496741s" podCreationTimestamp="2026-02-19 19:41:00 +0000 UTC" firstStartedPulling="2026-02-19 19:41:01.988168129 +0000 UTC m=+1329.778834071" lastFinishedPulling="2026-02-19 19:41:43.698347257 +0000 UTC m=+1371.489013199" observedRunningTime="2026-02-19 19:41:44.695357112 +0000 UTC m=+1372.486023054" watchObservedRunningTime="2026-02-19 19:41:44.717496741 +0000 UTC m=+1372.508162683" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.724421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" event={"ID":"0eae0600-9f79-4c97-9af4-c5f2ac5de934","Type":"ContainerStarted","Data":"a82a58cd6f8e5b64347927897c1fa036e6ef590e31a47af06a359ac15584569a"} Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.794139 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85777598cb-xrn2z"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.796086 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.804356 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.804504 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.804732 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.804798 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4g7tj" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.805022 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.814678 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-combined-ca-bundle\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.814788 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-scripts\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.814808 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-public-tls-certs\") pod 
\"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.814829 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-fernet-keys\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.814873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-internal-tls-certs\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.815015 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-config-data\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.815064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbpm\" (UniqueName: \"kubernetes.io/projected/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-kube-api-access-szbpm\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.815101 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-credential-keys\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.864867 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85777598cb-xrn2z"] Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.886245 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-9m8s6" podStartSLOduration=3.602376219 podStartE2EDuration="45.88622024s" podCreationTimestamp="2026-02-19 19:40:59 +0000 UTC" firstStartedPulling="2026-02-19 19:41:01.493785826 +0000 UTC m=+1329.284451758" lastFinishedPulling="2026-02-19 19:41:43.777629837 +0000 UTC m=+1371.568295779" observedRunningTime="2026-02-19 19:41:44.825920281 +0000 UTC m=+1372.616586233" watchObservedRunningTime="2026-02-19 19:41:44.88622024 +0000 UTC m=+1372.676886172" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.917914 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-scripts\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.917968 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-public-tls-certs\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.917995 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-fernet-keys\") pod 
\"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918025 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-internal-tls-certs\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918090 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ggx\" (UniqueName: \"kubernetes.io/projected/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-kube-api-access-54ggx\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918115 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-config-data\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918157 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-config-data\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918196 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szbpm\" (UniqueName: \"kubernetes.io/projected/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-kube-api-access-szbpm\") pod 
\"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918215 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-internal-tls-certs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918235 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-credential-keys\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918257 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-scripts\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-combined-ca-bundle\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918327 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-logs\") pod \"placement-85777598cb-xrn2z\" 
(UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918347 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-combined-ca-bundle\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.918370 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-public-tls-certs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.957187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-scripts\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:44 crc kubenswrapper[4787]: I0219 19:41:44.968802 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-fernet-keys\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.008276 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-public-tls-certs\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 
19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.008593 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-credential-keys\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.009345 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-combined-ca-bundle\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.017199 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szbpm\" (UniqueName: \"kubernetes.io/projected/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-kube-api-access-szbpm\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.022998 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-logs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.023230 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-public-tls-certs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.023442 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-54ggx\" (UniqueName: \"kubernetes.io/projected/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-kube-api-access-54ggx\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.023540 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-config-data\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.023960 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-internal-tls-certs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.024080 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-scripts\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.024188 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-combined-ca-bundle\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.025182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-logs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.039448 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-config-data\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.039998 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-config-data\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.040397 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf-internal-tls-certs\") pod \"keystone-5f8c6bbf7c-8r6fm\" (UID: \"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf\") " pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.043176 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-public-tls-certs\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.043582 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-internal-tls-certs\") pod \"placement-85777598cb-xrn2z\" (UID: 
\"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.048069 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.062226 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-combined-ca-bundle\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.072454 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ggx\" (UniqueName: \"kubernetes.io/projected/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-kube-api-access-54ggx\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.092064 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-scripts\") pod \"placement-85777598cb-xrn2z\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.207179 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.272394 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f76547854-2wh5m"] Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.275064 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.331966 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f76547854-2wh5m"] Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333204 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-config-data\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-combined-ca-bundle\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333287 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b1456e-ecae-4668-a7c5-4aea5446af5f-logs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-internal-tls-certs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333371 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-public-tls-certs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333455 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqd4c\" (UniqueName: \"kubernetes.io/projected/05b1456e-ecae-4668-a7c5-4aea5446af5f-kube-api-access-wqd4c\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.333500 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-scripts\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.434966 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqd4c\" (UniqueName: \"kubernetes.io/projected/05b1456e-ecae-4668-a7c5-4aea5446af5f-kube-api-access-wqd4c\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.435280 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-scripts\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.435392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-config-data\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.435449 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-combined-ca-bundle\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.435483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b1456e-ecae-4668-a7c5-4aea5446af5f-logs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.435511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-internal-tls-certs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.435531 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-public-tls-certs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.442955 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05b1456e-ecae-4668-a7c5-4aea5446af5f-logs\") pod \"placement-5f76547854-2wh5m\" 
(UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.446781 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-scripts\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.449343 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-combined-ca-bundle\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.490428 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqd4c\" (UniqueName: \"kubernetes.io/projected/05b1456e-ecae-4668-a7c5-4aea5446af5f-kube-api-access-wqd4c\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.491031 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-config-data\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.506512 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-public-tls-certs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc 
kubenswrapper[4787]: I0219 19:41:45.508970 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05b1456e-ecae-4668-a7c5-4aea5446af5f-internal-tls-certs\") pod \"placement-5f76547854-2wh5m\" (UID: \"05b1456e-ecae-4668-a7c5-4aea5446af5f\") " pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.664655 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.747998 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8549458c84-mvvtk" event={"ID":"748246f5-7418-4111-b5bb-599732020b19","Type":"ContainerStarted","Data":"96811f62ee8a14b2b827bbee843394ae893bb89375412557884b28626eeeb1cf"} Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.749711 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.752885 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8c48d785-rdt7v" event={"ID":"62a31ecf-6e1f-474f-99ac-aa021dca2905","Type":"ContainerStarted","Data":"7f742740268ad7d8b052e9c387c5baea0b6a65240b85cad06868ba984049e347"} Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.763086 4787 generic.go:334] "Generic (PLEG): container finished" podID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerID="b8377d45bbc85c2f640dfbb024cfd3ebba75f09602ec13ee985995ff46421dc6" exitCode=0 Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.763199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" event={"ID":"0eae0600-9f79-4c97-9af4-c5f2ac5de934","Type":"ContainerDied","Data":"b8377d45bbc85c2f640dfbb024cfd3ebba75f09602ec13ee985995ff46421dc6"} Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.763420 4787 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f8c6bbf7c-8r6fm"] Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.779318 4787 generic.go:334] "Generic (PLEG): container finished" podID="b341cddb-4e14-4928-af2b-18b902d1999c" containerID="225778b6b22ebbc805da1e42b1f565f5dce58b743e9e399720a5eb33961c6c0a" exitCode=0 Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.779361 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerDied","Data":"225778b6b22ebbc805da1e42b1f565f5dce58b743e9e399720a5eb33961c6c0a"} Feb 19 19:41:45 crc kubenswrapper[4787]: I0219 19:41:45.782698 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8549458c84-mvvtk" podStartSLOduration=12.782667213 podStartE2EDuration="12.782667213s" podCreationTimestamp="2026-02-19 19:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:45.777381802 +0000 UTC m=+1373.568047744" watchObservedRunningTime="2026-02-19 19:41:45.782667213 +0000 UTC m=+1373.573333155" Feb 19 19:41:45 crc kubenswrapper[4787]: W0219 19:41:45.789328 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc95c4e6a_386d_49a9_a8ff_0ea1fdc47ecf.slice/crio-336d28339cc6f650381f38df85c30856e25ca52151a1fc32cefecf04c733423b WatchSource:0}: Error finding container 336d28339cc6f650381f38df85c30856e25ca52151a1fc32cefecf04c733423b: Status 404 returned error can't find the container with id 336d28339cc6f650381f38df85c30856e25ca52151a1fc32cefecf04c733423b Feb 19 19:41:46 crc kubenswrapper[4787]: W0219 19:41:46.194598 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded5b42a6_c95d_49ec_afac_3d41f285ed9c.slice/crio-72410a967adadfe3ae8babf97841e9c0c6db1eccf61fa88b0be2615f9e2ca29e WatchSource:0}: Error finding container 72410a967adadfe3ae8babf97841e9c0c6db1eccf61fa88b0be2615f9e2ca29e: Status 404 returned error can't find the container with id 72410a967adadfe3ae8babf97841e9c0c6db1eccf61fa88b0be2615f9e2ca29e Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.200186 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85777598cb-xrn2z"] Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.318585 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.426756 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f76547854-2wh5m"] Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.807786 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f76547854-2wh5m" event={"ID":"05b1456e-ecae-4668-a7c5-4aea5446af5f","Type":"ContainerStarted","Data":"fc54a5f29b33fd2bd416703b4b69a974f77c0a40c855092272766ea800568e0d"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.808137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f76547854-2wh5m" event={"ID":"05b1456e-ecae-4668-a7c5-4aea5446af5f","Type":"ContainerStarted","Data":"f1d1e7345907dfe5bc149ad00681a04824c619ce5627c7deabc0faa59367a500"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.827989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" event={"ID":"0eae0600-9f79-4c97-9af4-c5f2ac5de934","Type":"ContainerStarted","Data":"5ae52d7e615e9a07b6d20ce33046cae5660c2dee725f1f3879c848e83040b5f6"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.829322 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.861916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8c48d785-rdt7v" event={"ID":"62a31ecf-6e1f-474f-99ac-aa021dca2905","Type":"ContainerStarted","Data":"2bc5374b28da7c30db759a0e004f9ec9420ebe0b79447370e3f2a4dd6eb51485"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.863118 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.879878 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85777598cb-xrn2z" event={"ID":"ed5b42a6-c95d-49ec-afac-3d41f285ed9c","Type":"ContainerStarted","Data":"72410a967adadfe3ae8babf97841e9c0c6db1eccf61fa88b0be2615f9e2ca29e"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.880600 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" podStartSLOduration=13.88058673 podStartE2EDuration="13.88058673s" podCreationTimestamp="2026-02-19 19:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:46.868310269 +0000 UTC m=+1374.658976221" watchObservedRunningTime="2026-02-19 19:41:46.88058673 +0000 UTC m=+1374.671252672" Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.909401 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d8c48d785-rdt7v" podStartSLOduration=10.909386748 podStartE2EDuration="10.909386748s" podCreationTimestamp="2026-02-19 19:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:46.897407287 +0000 UTC m=+1374.688073249" watchObservedRunningTime="2026-02-19 19:41:46.909386748 +0000 UTC m=+1374.700052690" Feb 19 19:41:46 crc 
kubenswrapper[4787]: I0219 19:41:46.929179 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f8c6bbf7c-8r6fm" event={"ID":"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf","Type":"ContainerStarted","Data":"b46375c3d82e2aded9c5655cb880326d55d2deb3620165fdbc913a6d27acb2ba"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.929242 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.929264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f8c6bbf7c-8r6fm" event={"ID":"c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf","Type":"ContainerStarted","Data":"336d28339cc6f650381f38df85c30856e25ca52151a1fc32cefecf04c733423b"} Feb 19 19:41:46 crc kubenswrapper[4787]: I0219 19:41:46.976469 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f8c6bbf7c-8r6fm" podStartSLOduration=2.976446659 podStartE2EDuration="2.976446659s" podCreationTimestamp="2026-02-19 19:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:46.945167285 +0000 UTC m=+1374.735833227" watchObservedRunningTime="2026-02-19 19:41:46.976446659 +0000 UTC m=+1374.767112601" Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.938838 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85777598cb-xrn2z" event={"ID":"ed5b42a6-c95d-49ec-afac-3d41f285ed9c","Type":"ContainerStarted","Data":"0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de"} Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.939244 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85777598cb-xrn2z" event={"ID":"ed5b42a6-c95d-49ec-afac-3d41f285ed9c","Type":"ContainerStarted","Data":"653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d"} Feb 19 19:41:47 crc 
kubenswrapper[4787]: I0219 19:41:47.939480 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.945426 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f76547854-2wh5m" event={"ID":"05b1456e-ecae-4668-a7c5-4aea5446af5f","Type":"ContainerStarted","Data":"b139b7e3008c3e54074b6e752ebcd6d09b948ecbe3b47adb294508ee0aa70cdf"} Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.945871 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.946022 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.948098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlnf7" event={"ID":"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d","Type":"ContainerStarted","Data":"ade3a529923d763b53fa405a05c7b4f2f5e17f97a837df7a9e128d9fb7c36d40"} Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.951024 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerStarted","Data":"e6a3556b647212ecfff605de992c2e6d0ae9e331c50fd5eef779e2bf272fdd11"} Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.971915 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85777598cb-xrn2z" podStartSLOduration=3.971893499 podStartE2EDuration="3.971893499s" podCreationTimestamp="2026-02-19 19:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:47.966232045 +0000 UTC m=+1375.756897987" watchObservedRunningTime="2026-02-19 19:41:47.971893499 +0000 UTC 
m=+1375.762559441" Feb 19 19:41:47 crc kubenswrapper[4787]: I0219 19:41:47.991734 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vlnf7" podStartSLOduration=5.237604882 podStartE2EDuration="48.991716379s" podCreationTimestamp="2026-02-19 19:40:59 +0000 UTC" firstStartedPulling="2026-02-19 19:41:01.897164769 +0000 UTC m=+1329.687830711" lastFinishedPulling="2026-02-19 19:41:45.651276266 +0000 UTC m=+1373.441942208" observedRunningTime="2026-02-19 19:41:47.980202574 +0000 UTC m=+1375.770868516" watchObservedRunningTime="2026-02-19 19:41:47.991716379 +0000 UTC m=+1375.782382321" Feb 19 19:41:48 crc kubenswrapper[4787]: I0219 19:41:48.042321 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f76547854-2wh5m" podStartSLOduration=3.042299315 podStartE2EDuration="3.042299315s" podCreationTimestamp="2026-02-19 19:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:48.016189989 +0000 UTC m=+1375.806855931" watchObservedRunningTime="2026-02-19 19:41:48.042299315 +0000 UTC m=+1375.832965247" Feb 19 19:41:48 crc kubenswrapper[4787]: I0219 19:41:48.961450 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:41:49 crc kubenswrapper[4787]: I0219 19:41:49.984636 4787 generic.go:334] "Generic (PLEG): container finished" podID="b341cddb-4e14-4928-af2b-18b902d1999c" containerID="e6a3556b647212ecfff605de992c2e6d0ae9e331c50fd5eef779e2bf272fdd11" exitCode=0 Feb 19 19:41:49 crc kubenswrapper[4787]: I0219 19:41:49.984694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerDied","Data":"e6a3556b647212ecfff605de992c2e6d0ae9e331c50fd5eef779e2bf272fdd11"} Feb 19 19:41:53 crc kubenswrapper[4787]: 
I0219 19:41:53.019268 4787 generic.go:334] "Generic (PLEG): container finished" podID="c3100990-268a-4c84-8e81-ba54457b771b" containerID="33c5329e15100f521363d042f0452651923f43c508a25503baff1a8ed5c2315e" exitCode=0 Feb 19 19:41:53 crc kubenswrapper[4787]: I0219 19:41:53.019405 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7vnc9" event={"ID":"c3100990-268a-4c84-8e81-ba54457b771b","Type":"ContainerDied","Data":"33c5329e15100f521363d042f0452651923f43c508a25503baff1a8ed5c2315e"} Feb 19 19:41:53 crc kubenswrapper[4787]: I0219 19:41:53.962842 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.032574 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sns42"] Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.033003 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerName="dnsmasq-dns" containerID="cri-o://811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36" gracePeriod=10 Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.146045 4787 generic.go:334] "Generic (PLEG): container finished" podID="3d2b70fa-1540-4748-8660-6d1fb44036fe" containerID="ec051585bc6a7852fa175077fb4d757ce922e6fdd160df98d35fcec5b0f477a7" exitCode=0 Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.146564 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9m8s6" event={"ID":"3d2b70fa-1540-4748-8660-6d1fb44036fe","Type":"ContainerDied","Data":"ec051585bc6a7852fa175077fb4d757ce922e6fdd160df98d35fcec5b0f477a7"} Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.698653 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.741135 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-combined-ca-bundle\") pod \"c3100990-268a-4c84-8e81-ba54457b771b\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.741320 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtqwx\" (UniqueName: \"kubernetes.io/projected/c3100990-268a-4c84-8e81-ba54457b771b-kube-api-access-wtqwx\") pod \"c3100990-268a-4c84-8e81-ba54457b771b\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.741432 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-db-sync-config-data\") pod \"c3100990-268a-4c84-8e81-ba54457b771b\" (UID: \"c3100990-268a-4c84-8e81-ba54457b771b\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.750262 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c3100990-268a-4c84-8e81-ba54457b771b" (UID: "c3100990-268a-4c84-8e81-ba54457b771b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.751812 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3100990-268a-4c84-8e81-ba54457b771b-kube-api-access-wtqwx" (OuterVolumeSpecName: "kube-api-access-wtqwx") pod "c3100990-268a-4c84-8e81-ba54457b771b" (UID: "c3100990-268a-4c84-8e81-ba54457b771b"). 
InnerVolumeSpecName "kube-api-access-wtqwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.791736 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3100990-268a-4c84-8e81-ba54457b771b" (UID: "c3100990-268a-4c84-8e81-ba54457b771b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.795219 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.843381 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-swift-storage-0\") pod \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.843435 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-nb\") pod \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.843480 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-config\") pod \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.843516 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-sb\") pod \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.843539 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84qx\" (UniqueName: \"kubernetes.io/projected/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-kube-api-access-c84qx\") pod \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.843868 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-svc\") pod \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\" (UID: \"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81\") " Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.844715 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtqwx\" (UniqueName: \"kubernetes.io/projected/c3100990-268a-4c84-8e81-ba54457b771b-kube-api-access-wtqwx\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.844742 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.844755 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3100990-268a-4c84-8e81-ba54457b771b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.859234 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-kube-api-access-c84qx" (OuterVolumeSpecName: "kube-api-access-c84qx") pod 
"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" (UID: "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81"). InnerVolumeSpecName "kube-api-access-c84qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.908931 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" (UID: "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.922479 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-config" (OuterVolumeSpecName: "config") pod "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" (UID: "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.925299 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" (UID: "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.932882 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" (UID: "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.948792 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.948818 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.948828 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.948837 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.948845 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84qx\" (UniqueName: \"kubernetes.io/projected/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-kube-api-access-c84qx\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:54 crc kubenswrapper[4787]: I0219 19:41:54.961109 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" (UID: "b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.051461 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.159081 4787 generic.go:334] "Generic (PLEG): container finished" podID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerID="811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36" exitCode=0 Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.159178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" event={"ID":"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81","Type":"ContainerDied","Data":"811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36"} Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.159195 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.159228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sns42" event={"ID":"b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81","Type":"ContainerDied","Data":"7a5f9bb6ec116529822a96d4782ae8d9b164060a291df10a2c1353689f78b5d5"} Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.159249 4787 scope.go:117] "RemoveContainer" containerID="811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.163845 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7vnc9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.164352 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7vnc9" event={"ID":"c3100990-268a-4c84-8e81-ba54457b771b","Type":"ContainerDied","Data":"e65974311dcf3ade34e44c41efac41fb0d46a64d2f8035b7bc42dcf99292c7ba"} Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.164391 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e65974311dcf3ade34e44c41efac41fb0d46a64d2f8035b7bc42dcf99292c7ba" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.169732 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerStarted","Data":"5979d1a1099c4a81c967c497327717b7b09b5c839e7246d48b1eac97d1b029ab"} Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.196365 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sns42"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.210910 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sns42"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.304029 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-67b499844-xlhd9"] Feb 19 19:41:55 crc kubenswrapper[4787]: E0219 19:41:55.308801 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerName="init" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.308842 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerName="init" Feb 19 19:41:55 crc kubenswrapper[4787]: E0219 19:41:55.308875 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerName="dnsmasq-dns" Feb 19 
19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.308884 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerName="dnsmasq-dns" Feb 19 19:41:55 crc kubenswrapper[4787]: E0219 19:41:55.308901 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3100990-268a-4c84-8e81-ba54457b771b" containerName="barbican-db-sync" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.308909 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3100990-268a-4c84-8e81-ba54457b771b" containerName="barbican-db-sync" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.317799 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" containerName="dnsmasq-dns" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.317856 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3100990-268a-4c84-8e81-ba54457b771b" containerName="barbican-db-sync" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.319581 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-755bd8f67-9b2kl"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.319795 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.327202 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.330485 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67b499844-xlhd9"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.331273 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.331648 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.332406 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tvg4q" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.338106 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.347241 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-755bd8f67-9b2kl"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.438920 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vk46m"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.448981 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.469840 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-config-data-custom\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.469883 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-config-data\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.469956 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-combined-ca-bundle\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470032 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-combined-ca-bundle\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470072 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6l9\" (UniqueName: 
\"kubernetes.io/projected/2d708d6b-a0b3-4add-8c98-0a33b73965ed-kube-api-access-bw6l9\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470093 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d708d6b-a0b3-4add-8c98-0a33b73965ed-logs\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470129 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-config-data\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470146 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljqv\" (UniqueName: \"kubernetes.io/projected/31450605-eeb8-465d-924d-509a32d908ea-kube-api-access-mljqv\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31450605-eeb8-465d-924d-509a32d908ea-logs\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.470192 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-config-data-custom\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.471547 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vk46m"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.511946 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d9b7d8444-dlm9r"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.513708 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.516902 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.563671 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d9b7d8444-dlm9r"] Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574057 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d708d6b-a0b3-4add-8c98-0a33b73965ed-logs\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574113 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " 
pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574132 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574150 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-config\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574172 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-config-data\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574189 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mljqv\" (UniqueName: \"kubernetes.io/projected/31450605-eeb8-465d-924d-509a32d908ea-kube-api-access-mljqv\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574226 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31450605-eeb8-465d-924d-509a32d908ea-logs\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " 
pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574243 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574266 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-config-data-custom\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574288 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kdjk\" (UniqueName: \"kubernetes.io/projected/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-kube-api-access-6kdjk\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574309 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-svc\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574329 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-config-data-custom\") pod 
\"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-config-data\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574363 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-logs\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574427 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-combined-ca-bundle\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574457 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574494 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xq7\" (UniqueName: 
\"kubernetes.io/projected/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-kube-api-access-68xq7\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-combined-ca-bundle\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574559 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-combined-ca-bundle\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574577 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data-custom\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.574620 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6l9\" (UniqueName: \"kubernetes.io/projected/2d708d6b-a0b3-4add-8c98-0a33b73965ed-kube-api-access-bw6l9\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.575258 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d708d6b-a0b3-4add-8c98-0a33b73965ed-logs\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.584157 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-config-data\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.585456 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31450605-eeb8-465d-924d-509a32d908ea-logs\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.585518 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-combined-ca-bundle\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.590195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31450605-eeb8-465d-924d-509a32d908ea-config-data-custom\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.598757 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-config-data-custom\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.599493 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-combined-ca-bundle\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.604000 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d708d6b-a0b3-4add-8c98-0a33b73965ed-config-data\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.604742 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6l9\" (UniqueName: \"kubernetes.io/projected/2d708d6b-a0b3-4add-8c98-0a33b73965ed-kube-api-access-bw6l9\") pod \"barbican-keystone-listener-67b499844-xlhd9\" (UID: \"2d708d6b-a0b3-4add-8c98-0a33b73965ed\") " pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.606244 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mljqv\" (UniqueName: \"kubernetes.io/projected/31450605-eeb8-465d-924d-509a32d908ea-kube-api-access-mljqv\") pod \"barbican-worker-755bd8f67-9b2kl\" (UID: \"31450605-eeb8-465d-924d-509a32d908ea\") " pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.678853 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.678905 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.678944 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-config\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679032 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679080 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kdjk\" (UniqueName: \"kubernetes.io/projected/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-kube-api-access-6kdjk\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679112 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-svc\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679144 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-logs\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679312 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679394 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68xq7\" (UniqueName: \"kubernetes.io/projected/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-kube-api-access-68xq7\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679494 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-combined-ca-bundle\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679531 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data-custom\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679830 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.679920 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.680490 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.680809 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-logs\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.680844 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-config\") pod 
\"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.683958 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-combined-ca-bundle\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.684328 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-67b499844-xlhd9" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.685498 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data-custom\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.688713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-svc\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.694335 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.696915 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68xq7\" (UniqueName: \"kubernetes.io/projected/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-kube-api-access-68xq7\") pod \"dnsmasq-dns-85ff748b95-vk46m\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.703085 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kdjk\" (UniqueName: \"kubernetes.io/projected/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-kube-api-access-6kdjk\") pod \"barbican-api-5d9b7d8444-dlm9r\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.734315 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-755bd8f67-9b2kl" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.741857 4787 scope.go:117] "RemoveContainer" containerID="fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac" Feb 19 19:41:55 crc kubenswrapper[4787]: E0219 19:41:55.743696 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="364fd284-971f-4143-94fa-542904ee31fb" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.791058 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.853401 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.892688 4787 scope.go:117] "RemoveContainer" containerID="811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36" Feb 19 19:41:55 crc kubenswrapper[4787]: E0219 19:41:55.897343 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36\": container with ID starting with 811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36 not found: ID does not exist" containerID="811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.897418 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36"} err="failed to get container status \"811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36\": rpc error: code = NotFound desc = could not find container \"811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36\": container with ID starting with 811b21ea5fd34525612e3363cf4e10a10614ecc660cd31a5f647f3d00926dd36 not found: ID does not exist" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.897455 4787 scope.go:117] "RemoveContainer" containerID="fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.904765 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:55 crc kubenswrapper[4787]: E0219 19:41:55.905157 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac\": container with ID starting with fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac not found: ID does not exist" containerID="fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac" Feb 19 19:41:55 crc kubenswrapper[4787]: I0219 19:41:55.905205 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac"} err="failed to get container status \"fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac\": rpc error: code = NotFound desc = could not find container \"fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac\": container with ID starting with fdcad8eee68e4008fc81b919ea8b78b164ca57c630e94896373d426e5be667ac not found: ID does not exist" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.103697 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-config-data\") pod \"3d2b70fa-1540-4748-8660-6d1fb44036fe\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.104087 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrkwg\" (UniqueName: \"kubernetes.io/projected/3d2b70fa-1540-4748-8660-6d1fb44036fe-kube-api-access-hrkwg\") pod \"3d2b70fa-1540-4748-8660-6d1fb44036fe\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.104170 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-combined-ca-bundle\") pod \"3d2b70fa-1540-4748-8660-6d1fb44036fe\" (UID: \"3d2b70fa-1540-4748-8660-6d1fb44036fe\") " Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.161861 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2b70fa-1540-4748-8660-6d1fb44036fe-kube-api-access-hrkwg" (OuterVolumeSpecName: "kube-api-access-hrkwg") pod "3d2b70fa-1540-4748-8660-6d1fb44036fe" (UID: "3d2b70fa-1540-4748-8660-6d1fb44036fe"). InnerVolumeSpecName "kube-api-access-hrkwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.206892 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrkwg\" (UniqueName: \"kubernetes.io/projected/3d2b70fa-1540-4748-8660-6d1fb44036fe-kube-api-access-hrkwg\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.250766 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d2b70fa-1540-4748-8660-6d1fb44036fe" (UID: "3d2b70fa-1540-4748-8660-6d1fb44036fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.307494 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9m8s6" event={"ID":"3d2b70fa-1540-4748-8660-6d1fb44036fe","Type":"ContainerDied","Data":"defefb4f2a14a465e0dce525a6312abc590012e98ef7001db1f7473c4fb9972d"} Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.307555 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="defefb4f2a14a465e0dce525a6312abc590012e98ef7001db1f7473c4fb9972d" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.307718 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-9m8s6" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.333340 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.365913 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="ceilometer-notification-agent" containerID="cri-o://aa99fdb689a7fdeeee3b5b7a5f1a775ca3ab107fd54347c9c50ed4639ec382ae" gracePeriod=30 Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.366027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerStarted","Data":"eaf8c48ebf8b26760eb25e605559c8dfde5862db918a32ca2f959880d75b69bf"} Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.366070 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.366115 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="proxy-httpd" containerID="cri-o://eaf8c48ebf8b26760eb25e605559c8dfde5862db918a32ca2f959880d75b69bf" gracePeriod=30 Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.366165 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="sg-core" containerID="cri-o://040d66b1bc3b8803ac6e7143d2a753aae2b261a653f846aaa70df189cb99ccad" gracePeriod=30 Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.398258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-config-data" (OuterVolumeSpecName: "config-data") pod "3d2b70fa-1540-4748-8660-6d1fb44036fe" (UID: "3d2b70fa-1540-4748-8660-6d1fb44036fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.409035 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6k49" podStartSLOduration=10.228508412 podStartE2EDuration="18.40899995s" podCreationTimestamp="2026-02-19 19:41:38 +0000 UTC" firstStartedPulling="2026-02-19 19:41:45.789244409 +0000 UTC m=+1373.579910351" lastFinishedPulling="2026-02-19 19:41:53.969735947 +0000 UTC m=+1381.760401889" observedRunningTime="2026-02-19 19:41:56.398169429 +0000 UTC m=+1384.188835371" watchObservedRunningTime="2026-02-19 19:41:56.40899995 +0000 UTC m=+1384.199665882" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.436461 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d2b70fa-1540-4748-8660-6d1fb44036fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.763150 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-755bd8f67-9b2kl"] Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.795249 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67b499844-xlhd9"] Feb 19 19:41:56 crc kubenswrapper[4787]: I0219 19:41:56.933202 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81" path="/var/lib/kubelet/pods/b4903eba-3da3-4b4a-b6ad-4db4e1fd3f81/volumes" Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.046561 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d9b7d8444-dlm9r"] Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.084740 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vk46m"] Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.376856 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" event={"ID":"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa","Type":"ContainerStarted","Data":"39dfeabba83b5e89e11e748e77c4989161b32bb598387be8e840fa762570a608"} Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.380080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b499844-xlhd9" event={"ID":"2d708d6b-a0b3-4add-8c98-0a33b73965ed","Type":"ContainerStarted","Data":"dee0b3fc78494288d029766f25a60ea0f973cac75849c6366f4b65d4a8386ea3"} Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.382644 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d9b7d8444-dlm9r" event={"ID":"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1","Type":"ContainerStarted","Data":"794153e893f794cc258c72ae3de677b073fd39cb7a8c46596713795effaa7eca"} Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.389275 4787 generic.go:334] "Generic (PLEG): container finished" podID="364fd284-971f-4143-94fa-542904ee31fb" 
containerID="040d66b1bc3b8803ac6e7143d2a753aae2b261a653f846aaa70df189cb99ccad" exitCode=2 Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.389388 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerDied","Data":"040d66b1bc3b8803ac6e7143d2a753aae2b261a653f846aaa70df189cb99ccad"} Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.394280 4787 generic.go:334] "Generic (PLEG): container finished" podID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" containerID="ade3a529923d763b53fa405a05c7b4f2f5e17f97a837df7a9e128d9fb7c36d40" exitCode=0 Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.394386 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlnf7" event={"ID":"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d","Type":"ContainerDied","Data":"ade3a529923d763b53fa405a05c7b4f2f5e17f97a837df7a9e128d9fb7c36d40"} Feb 19 19:41:57 crc kubenswrapper[4787]: I0219 19:41:57.399228 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-755bd8f67-9b2kl" event={"ID":"31450605-eeb8-465d-924d-509a32d908ea","Type":"ContainerStarted","Data":"5bf19d475963932ef7ef88bb69908e97404e6837868f867dda36ad59613e49fb"} Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.373738 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d48c87bfd-7pzgt"] Feb 19 19:41:58 crc kubenswrapper[4787]: E0219 19:41:58.374545 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2b70fa-1540-4748-8660-6d1fb44036fe" containerName="heat-db-sync" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.374563 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2b70fa-1540-4748-8660-6d1fb44036fe" containerName="heat-db-sync" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.374809 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2b70fa-1540-4748-8660-6d1fb44036fe" 
containerName="heat-db-sync" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.376266 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.378416 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.378919 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.383395 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-logs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.383434 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-combined-ca-bundle\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.383557 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-public-tls-certs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.383667 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-internal-tls-certs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.383725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-config-data\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.383955 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-config-data-custom\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.384094 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjxx6\" (UniqueName: \"kubernetes.io/projected/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-kube-api-access-xjxx6\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.407216 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d48c87bfd-7pzgt"] Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.460854 4787 generic.go:334] "Generic (PLEG): container finished" podID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerID="5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236" exitCode=0 Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.460978 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-85ff748b95-vk46m" event={"ID":"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa","Type":"ContainerDied","Data":"5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236"} Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488156 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-logs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488203 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-combined-ca-bundle\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488530 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-public-tls-certs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488709 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-internal-tls-certs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-config-data\") 
pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488893 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-config-data-custom\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.488960 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjxx6\" (UniqueName: \"kubernetes.io/projected/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-kube-api-access-xjxx6\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.490852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-logs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.499332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d9b7d8444-dlm9r" event={"ID":"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1","Type":"ContainerStarted","Data":"0b1ca01065887e7b42db747fa85ce04f2942a679d5099f91111d8c54d376993d"} Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.499380 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.499390 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d9b7d8444-dlm9r" 
event={"ID":"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1","Type":"ContainerStarted","Data":"06683d370d53595ebf3a7e94b294ca48f0ba7ee66770da19eb94eef266d08874"} Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.499414 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.509905 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-internal-tls-certs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.517296 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-combined-ca-bundle\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.518082 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-public-tls-certs\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.526521 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-config-data\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.548135 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xjxx6\" (UniqueName: \"kubernetes.io/projected/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-kube-api-access-xjxx6\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.574550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c827ac-57d9-48c2-adc1-4358fd81e5b1-config-data-custom\") pod \"barbican-api-5d48c87bfd-7pzgt\" (UID: \"f0c827ac-57d9-48c2-adc1-4358fd81e5b1\") " pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.590354 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.592259 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.616685 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d9b7d8444-dlm9r" podStartSLOduration=3.616660409 podStartE2EDuration="3.616660409s" podCreationTimestamp="2026-02-19 19:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:41:58.581028007 +0000 UTC m=+1386.371693939" watchObservedRunningTime="2026-02-19 19:41:58.616660409 +0000 UTC m=+1386.407326351" Feb 19 19:41:58 crc kubenswrapper[4787]: I0219 19:41:58.695421 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.545595 4787 generic.go:334] "Generic (PLEG): container finished" podID="364fd284-971f-4143-94fa-542904ee31fb" containerID="aa99fdb689a7fdeeee3b5b7a5f1a775ca3ab107fd54347c9c50ed4639ec382ae" exitCode=0 Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.546057 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerDied","Data":"aa99fdb689a7fdeeee3b5b7a5f1a775ca3ab107fd54347c9c50ed4639ec382ae"} Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.563662 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vlnf7" event={"ID":"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d","Type":"ContainerDied","Data":"97762f09ec100ba927e6eb486eb7615fa847294f3bf371c9191d29bc9b8f1b93"} Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.563691 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97762f09ec100ba927e6eb486eb7615fa847294f3bf371c9191d29bc9b8f1b93" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.653597 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.727419 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6k49" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" probeResult="failure" output=< Feb 19 19:41:59 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 19:41:59 crc kubenswrapper[4787]: > Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.821201 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-scripts\") pod \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.821566 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-etc-machine-id\") pod \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.821686 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-combined-ca-bundle\") pod \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.821729 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4k49\" (UniqueName: \"kubernetes.io/projected/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-kube-api-access-c4k49\") pod \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.821768 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-db-sync-config-data\") pod \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.821874 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-config-data\") pod \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\" (UID: \"0a5aa867-a7f1-4a64-a8cd-d515fb1e210d\") " Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.822995 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" (UID: "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.832758 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" (UID: "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.832836 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-scripts" (OuterVolumeSpecName: "scripts") pod "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" (UID: "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.833059 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-kube-api-access-c4k49" (OuterVolumeSpecName: "kube-api-access-c4k49") pod "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" (UID: "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d"). InnerVolumeSpecName "kube-api-access-c4k49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.865837 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" (UID: "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.904362 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d48c87bfd-7pzgt"] Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.928581 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.928638 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.928652 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.928665 4787 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4k49\" (UniqueName: \"kubernetes.io/projected/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-kube-api-access-c4k49\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.928676 4787 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:41:59 crc kubenswrapper[4787]: I0219 19:41:59.931723 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-config-data" (OuterVolumeSpecName: "config-data") pod "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" (UID: "0a5aa867-a7f1-4a64-a8cd-d515fb1e210d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.030245 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.576272 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d48c87bfd-7pzgt" event={"ID":"f0c827ac-57d9-48c2-adc1-4358fd81e5b1","Type":"ContainerStarted","Data":"f5c540c365d757f3f256d9662567e58c8b280fc8b5973546b37e60ef93aad9b7"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.576732 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d48c87bfd-7pzgt" event={"ID":"f0c827ac-57d9-48c2-adc1-4358fd81e5b1","Type":"ContainerStarted","Data":"fd846817f2a70453c68debc2d3186ab4dd9acc515dfade813ac6412f534c8daf"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.576747 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d48c87bfd-7pzgt" 
event={"ID":"f0c827ac-57d9-48c2-adc1-4358fd81e5b1","Type":"ContainerStarted","Data":"3653f3a4a454b7872a9b3d3f307e3f259a618966ea992a6cc9ded1056929fa4a"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.578171 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.578202 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.582234 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-755bd8f67-9b2kl" event={"ID":"31450605-eeb8-465d-924d-509a32d908ea","Type":"ContainerStarted","Data":"620aaf1904c5c8f6993e40c2494628a94d46d2cbafe57db7fd17469cc7912775"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.582275 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-755bd8f67-9b2kl" event={"ID":"31450605-eeb8-465d-924d-509a32d908ea","Type":"ContainerStarted","Data":"437388f38a6f449170e89ebf1f478cfca213b828b8bb251bce103cdab7bdee50"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.589490 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" event={"ID":"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa","Type":"ContainerStarted","Data":"a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.590545 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.596168 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b499844-xlhd9" event={"ID":"2d708d6b-a0b3-4add-8c98-0a33b73965ed","Type":"ContainerStarted","Data":"e4101204966aa7f6e62d693a3cce76b47266932f7145053c11a83305151c6315"} Feb 19 19:42:00 crc 
kubenswrapper[4787]: I0219 19:42:00.596221 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67b499844-xlhd9" event={"ID":"2d708d6b-a0b3-4add-8c98-0a33b73965ed","Type":"ContainerStarted","Data":"1597b6372b4053ee083338b698b1e697f13c24cea037983d4ca6a49a539aa1f1"} Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.596346 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vlnf7" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.620809 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d48c87bfd-7pzgt" podStartSLOduration=2.620786384 podStartE2EDuration="2.620786384s" podCreationTimestamp="2026-02-19 19:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:00.60376654 +0000 UTC m=+1388.394432492" watchObservedRunningTime="2026-02-19 19:42:00.620786384 +0000 UTC m=+1388.411452336" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.636904 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-755bd8f67-9b2kl" podStartSLOduration=2.960356082 podStartE2EDuration="5.636880286s" podCreationTimestamp="2026-02-19 19:41:55 +0000 UTC" firstStartedPulling="2026-02-19 19:41:56.764733175 +0000 UTC m=+1384.555399117" lastFinishedPulling="2026-02-19 19:41:59.441257389 +0000 UTC m=+1387.231923321" observedRunningTime="2026-02-19 19:42:00.633022854 +0000 UTC m=+1388.423688806" watchObservedRunningTime="2026-02-19 19:42:00.636880286 +0000 UTC m=+1388.427546228" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.669022 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" podStartSLOduration=5.669002618 podStartE2EDuration="5.669002618s" podCreationTimestamp="2026-02-19 19:41:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:00.658096834 +0000 UTC m=+1388.448762776" watchObservedRunningTime="2026-02-19 19:42:00.669002618 +0000 UTC m=+1388.459668560" Feb 19 19:42:00 crc kubenswrapper[4787]: I0219 19:42:00.696595 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-67b499844-xlhd9" podStartSLOduration=3.060021601 podStartE2EDuration="5.696568664s" podCreationTimestamp="2026-02-19 19:41:55 +0000 UTC" firstStartedPulling="2026-02-19 19:41:56.80219488 +0000 UTC m=+1384.592860832" lastFinishedPulling="2026-02-19 19:41:59.438741953 +0000 UTC m=+1387.229407895" observedRunningTime="2026-02-19 19:42:00.675508522 +0000 UTC m=+1388.466174454" watchObservedRunningTime="2026-02-19 19:42:00.696568664 +0000 UTC m=+1388.487234616" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.029182 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:01 crc kubenswrapper[4787]: E0219 19:42:01.029985 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" containerName="cinder-db-sync" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.029998 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" containerName="cinder-db-sync" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.030260 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" containerName="cinder-db-sync" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.032811 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.038049 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.038231 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-xb2t4" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.038374 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.047066 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.151665 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.180352 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.180654 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.180741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.180869 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.180909 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.180930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wglsb\" (UniqueName: \"kubernetes.io/projected/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-kube-api-access-wglsb\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.298744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.298803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc 
kubenswrapper[4787]: I0219 19:42:01.298859 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.298928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.298953 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.298970 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wglsb\" (UniqueName: \"kubernetes.io/projected/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-kube-api-access-wglsb\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.299340 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.313044 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.316577 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.319584 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vk46m"] Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.320237 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.339959 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.353684 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wglsb\" (UniqueName: \"kubernetes.io/projected/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-kube-api-access-wglsb\") pod \"cinder-scheduler-0\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.365701 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pcz54"] Feb 19 19:42:01 crc kubenswrapper[4787]: 
I0219 19:42:01.367578 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pcz54"] Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.367751 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.401676 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.403579 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.406987 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.414701 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.421905 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511582 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511668 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwmq\" (UniqueName: \"kubernetes.io/projected/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-kube-api-access-nnwmq\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511697 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-scripts\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511721 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511740 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-config\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511765 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-logs\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511814 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511890 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz8ws\" (UniqueName: \"kubernetes.io/projected/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-kube-api-access-jz8ws\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511908 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data-custom\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.511965 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.512002 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614275 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz8ws\" (UniqueName: \"kubernetes.io/projected/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-kube-api-access-jz8ws\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614326 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data-custom\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614360 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614410 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614452 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614515 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614563 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnwmq\" (UniqueName: \"kubernetes.io/projected/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-kube-api-access-nnwmq\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-scripts\") pod \"cinder-api-0\" (UID: 
\"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614634 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614864 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-config\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.614902 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-logs\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.615436 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-logs\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.616685 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.619317 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.622391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.622545 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.622591 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.623137 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-config\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.627150 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-scripts\") pod \"cinder-api-0\" (UID: 
\"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.633355 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data-custom\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.636537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.639517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.646210 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnwmq\" (UniqueName: \"kubernetes.io/projected/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-kube-api-access-nnwmq\") pod \"cinder-api-0\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " pod="openstack/cinder-api-0" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.657741 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz8ws\" (UniqueName: \"kubernetes.io/projected/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-kube-api-access-jz8ws\") pod \"dnsmasq-dns-5c9776ccc5-pcz54\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.889206 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:01 crc kubenswrapper[4787]: I0219 19:42:01.898921 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 19:42:02 crc kubenswrapper[4787]: I0219 19:42:02.220602 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:02 crc kubenswrapper[4787]: W0219 19:42:02.227589 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec22d3c2_ac92_4d8b_a2d0_205fdd0e9f55.slice/crio-b02e302d33db50104c23658d15cb2ed4f49a8979827e56c430876ad0ae5a6416 WatchSource:0}: Error finding container b02e302d33db50104c23658d15cb2ed4f49a8979827e56c430876ad0ae5a6416: Status 404 returned error can't find the container with id b02e302d33db50104c23658d15cb2ed4f49a8979827e56c430876ad0ae5a6416 Feb 19 19:42:02 crc kubenswrapper[4787]: I0219 19:42:02.494056 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pcz54"] Feb 19 19:42:02 crc kubenswrapper[4787]: W0219 19:42:02.576100 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c04f822_c3ee_4f22_b9a7_1ad34dca0220.slice/crio-c8df70089f8578408d499bf6a13ce1a4bdc2e1a094f352de903378081bbb591f WatchSource:0}: Error finding container c8df70089f8578408d499bf6a13ce1a4bdc2e1a094f352de903378081bbb591f: Status 404 returned error can't find the container with id c8df70089f8578408d499bf6a13ce1a4bdc2e1a094f352de903378081bbb591f Feb 19 19:42:02 crc kubenswrapper[4787]: I0219 19:42:02.675923 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" event={"ID":"3c04f822-c3ee-4f22-b9a7-1ad34dca0220","Type":"ContainerStarted","Data":"c8df70089f8578408d499bf6a13ce1a4bdc2e1a094f352de903378081bbb591f"} Feb 19 19:42:02 crc kubenswrapper[4787]: I0219 
19:42:02.682344 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerName="dnsmasq-dns" containerID="cri-o://a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252" gracePeriod=10 Feb 19 19:42:02 crc kubenswrapper[4787]: I0219 19:42:02.682814 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55","Type":"ContainerStarted","Data":"b02e302d33db50104c23658d15cb2ed4f49a8979827e56c430876ad0ae5a6416"} Feb 19 19:42:02 crc kubenswrapper[4787]: I0219 19:42:02.687966 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.519071 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.523032 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.606451 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-swift-storage-0\") pod \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.606526 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68xq7\" (UniqueName: \"kubernetes.io/projected/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-kube-api-access-68xq7\") pod \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.606622 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-config\") pod \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.606670 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-nb\") pod \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.606695 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-sb\") pod \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.606822 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-svc\") pod \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\" (UID: \"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa\") " Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.629910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-kube-api-access-68xq7" (OuterVolumeSpecName: "kube-api-access-68xq7") pod "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" (UID: "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa"). InnerVolumeSpecName "kube-api-access-68xq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.685445 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" (UID: "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.709932 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.709961 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68xq7\" (UniqueName: \"kubernetes.io/projected/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-kube-api-access-68xq7\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.711174 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" (UID: "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.712572 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" (UID: "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.719783 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd","Type":"ContainerStarted","Data":"03c011ba364066c36e6f07c2d1bbc172765f45efff63e31f843217a846e447f4"} Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.721057 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" (UID: "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.722196 4787 generic.go:334] "Generic (PLEG): container finished" podID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerID="0b3552000e24191580cb9823e21078b530500afc71d67610a52addf7fe81f487" exitCode=0 Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.722239 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" event={"ID":"3c04f822-c3ee-4f22-b9a7-1ad34dca0220","Type":"ContainerDied","Data":"0b3552000e24191580cb9823e21078b530500afc71d67610a52addf7fe81f487"} Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.733238 4787 generic.go:334] "Generic (PLEG): container finished" podID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerID="a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252" exitCode=0 Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.733269 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.733568 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" event={"ID":"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa","Type":"ContainerDied","Data":"a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252"} Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.733731 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vk46m" event={"ID":"973ccfa9-a545-4c4e-b1a0-e0f56dea07fa","Type":"ContainerDied","Data":"39dfeabba83b5e89e11e748e77c4989161b32bb598387be8e840fa762570a608"} Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.733761 4787 scope.go:117] "RemoveContainer" containerID="a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.751760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-config" (OuterVolumeSpecName: "config") pod "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" (UID: "973ccfa9-a545-4c4e-b1a0-e0f56dea07fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.812210 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.812252 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.812262 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.812270 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.869589 4787 scope.go:117] "RemoveContainer" containerID="5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.921645 4787 scope.go:117] "RemoveContainer" containerID="a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252" Feb 19 19:42:03 crc kubenswrapper[4787]: E0219 19:42:03.923367 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252\": container with ID starting with a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252 not found: ID does not exist" containerID="a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 
19:42:03.923392 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252"} err="failed to get container status \"a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252\": rpc error: code = NotFound desc = could not find container \"a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252\": container with ID starting with a02f47295c8fc71397491bf13a3f69d7e5744d220636fdbdfb40627f4e093252 not found: ID does not exist" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.923410 4787 scope.go:117] "RemoveContainer" containerID="5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236" Feb 19 19:42:03 crc kubenswrapper[4787]: E0219 19:42:03.925024 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236\": container with ID starting with 5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236 not found: ID does not exist" containerID="5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236" Feb 19 19:42:03 crc kubenswrapper[4787]: I0219 19:42:03.925055 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236"} err="failed to get container status \"5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236\": rpc error: code = NotFound desc = could not find container \"5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236\": container with ID starting with 5ed8a6b28c2163da8546f250d4464761a693178cdb8d21aaa7aa59c272272236 not found: ID does not exist" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.097187 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8549458c84-mvvtk" Feb 19 19:42:04 crc 
kubenswrapper[4787]: I0219 19:42:04.108400 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vk46m"] Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.131045 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vk46m"] Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.389056 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d8c48d785-rdt7v"] Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.389368 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d8c48d785-rdt7v" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-api" containerID="cri-o://7f742740268ad7d8b052e9c387c5baea0b6a65240b85cad06868ba984049e347" gracePeriod=30 Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.390693 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d8c48d785-rdt7v" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-httpd" containerID="cri-o://2bc5374b28da7c30db759a0e004f9ec9420ebe0b79447370e3f2a4dd6eb51485" gracePeriod=30 Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.475435 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c797bbccc-wln47"] Feb 19 19:42:04 crc kubenswrapper[4787]: E0219 19:42:04.476063 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerName="init" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.476088 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerName="init" Feb 19 19:42:04 crc kubenswrapper[4787]: E0219 19:42:04.476114 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerName="dnsmasq-dns" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.476123 4787 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerName="dnsmasq-dns" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.476401 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" containerName="dnsmasq-dns" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.477858 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.493107 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-d8c48d785-rdt7v" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9696/\": EOF" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.509568 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c797bbccc-wln47"] Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553266 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-public-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553321 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-httpd-config\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553360 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwfp\" (UniqueName: 
\"kubernetes.io/projected/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-kube-api-access-bvwfp\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553415 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-combined-ca-bundle\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-config\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553469 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-ovndb-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.553522 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-internal-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655079 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-combined-ca-bundle\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655144 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-config\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655177 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-ovndb-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655235 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-internal-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655323 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-public-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655359 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-httpd-config\") pod \"neutron-6c797bbccc-wln47\" 
(UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.655390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwfp\" (UniqueName: \"kubernetes.io/projected/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-kube-api-access-bvwfp\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.665671 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-httpd-config\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.670224 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-public-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.675387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-internal-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.679128 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-config\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: 
I0219 19:42:04.679988 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-combined-ca-bundle\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.697301 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-ovndb-tls-certs\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.700598 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwfp\" (UniqueName: \"kubernetes.io/projected/9239fd31-7ea3-445a-bee3-b3c1a45f58cf-kube-api-access-bvwfp\") pod \"neutron-6c797bbccc-wln47\" (UID: \"9239fd31-7ea3-445a-bee3-b3c1a45f58cf\") " pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.778752 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55","Type":"ContainerStarted","Data":"a5f03ced1b1c87f961ebc145ad67e998d469eac66375a00dce07ccb5ea8a27b7"} Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.784331 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd","Type":"ContainerStarted","Data":"41df2aeaccb16cf4f9d90cc5a50492cee7a95b07cc3b6aff5a2f5b33514789d9"} Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.795456 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" 
event={"ID":"3c04f822-c3ee-4f22-b9a7-1ad34dca0220","Type":"ContainerStarted","Data":"3e767471a7e41ea35ca98aa52aa978c8979e879e75ecf04adb8f351dd6e50b50"} Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.797024 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.829880 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.844471 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" podStartSLOduration=3.844452756 podStartE2EDuration="3.844452756s" podCreationTimestamp="2026-02-19 19:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:04.824229642 +0000 UTC m=+1392.614895584" watchObservedRunningTime="2026-02-19 19:42:04.844452756 +0000 UTC m=+1392.635118698" Feb 19 19:42:04 crc kubenswrapper[4787]: I0219 19:42:04.929980 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973ccfa9-a545-4c4e-b1a0-e0f56dea07fa" path="/var/lib/kubelet/pods/973ccfa9-a545-4c4e-b1a0-e0f56dea07fa/volumes" Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.578621 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c797bbccc-wln47"] Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.810776 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd","Type":"ContainerStarted","Data":"bd951572ed8f312fdf70898dd30be73d5e9bd51c71d5607d93871c1d92ba14fa"} Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.811119 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 
19:42:05.810979 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api" containerID="cri-o://bd951572ed8f312fdf70898dd30be73d5e9bd51c71d5607d93871c1d92ba14fa" gracePeriod=30 Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.811221 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api-log" containerID="cri-o://41df2aeaccb16cf4f9d90cc5a50492cee7a95b07cc3b6aff5a2f5b33514789d9" gracePeriod=30 Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.815930 4787 generic.go:334] "Generic (PLEG): container finished" podID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerID="2bc5374b28da7c30db759a0e004f9ec9420ebe0b79447370e3f2a4dd6eb51485" exitCode=0 Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.815996 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8c48d785-rdt7v" event={"ID":"62a31ecf-6e1f-474f-99ac-aa021dca2905","Type":"ContainerDied","Data":"2bc5374b28da7c30db759a0e004f9ec9420ebe0b79447370e3f2a4dd6eb51485"} Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.817842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c797bbccc-wln47" event={"ID":"9239fd31-7ea3-445a-bee3-b3c1a45f58cf","Type":"ContainerStarted","Data":"a7fe2c15f6e9db8a4b595cfcd2ed8a498d5e8ac34caf703783abf986557d747e"} Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.820349 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55","Type":"ContainerStarted","Data":"96c773bf3959d319ad74805a687c879777245a5984acc6d1402124e85ec83bc8"} Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.847802 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=4.847780377 podStartE2EDuration="4.847780377s" podCreationTimestamp="2026-02-19 19:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:05.840336822 +0000 UTC m=+1393.631002764" watchObservedRunningTime="2026-02-19 19:42:05.847780377 +0000 UTC m=+1393.638446319" Feb 19 19:42:05 crc kubenswrapper[4787]: I0219 19:42:05.866474 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.145791784 podStartE2EDuration="5.866452518s" podCreationTimestamp="2026-02-19 19:42:00 +0000 UTC" firstStartedPulling="2026-02-19 19:42:02.234851418 +0000 UTC m=+1390.025517360" lastFinishedPulling="2026-02-19 19:42:02.955512152 +0000 UTC m=+1390.746178094" observedRunningTime="2026-02-19 19:42:05.862235813 +0000 UTC m=+1393.652901765" watchObservedRunningTime="2026-02-19 19:42:05.866452518 +0000 UTC m=+1393.657118450" Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.371493 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-d8c48d785-rdt7v" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9696/\": dial tcp 10.217.0.193:9696: connect: connection refused" Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.418664 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.836083 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c797bbccc-wln47" event={"ID":"9239fd31-7ea3-445a-bee3-b3c1a45f58cf","Type":"ContainerStarted","Data":"2231ef0c89227abf976914cbf42c13153ca58f4c9721c17a078754c02c7f0ba9"} Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.836421 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6c797bbccc-wln47" event={"ID":"9239fd31-7ea3-445a-bee3-b3c1a45f58cf","Type":"ContainerStarted","Data":"fbff93ac21b20ddb9c992f2d81ba81d82c4798bfb497b2955077439139d190cf"} Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.838196 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.853279 4787 generic.go:334] "Generic (PLEG): container finished" podID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerID="bd951572ed8f312fdf70898dd30be73d5e9bd51c71d5607d93871c1d92ba14fa" exitCode=0 Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.853318 4787 generic.go:334] "Generic (PLEG): container finished" podID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerID="41df2aeaccb16cf4f9d90cc5a50492cee7a95b07cc3b6aff5a2f5b33514789d9" exitCode=143 Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.854433 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd","Type":"ContainerDied","Data":"bd951572ed8f312fdf70898dd30be73d5e9bd51c71d5607d93871c1d92ba14fa"} Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.854473 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd","Type":"ContainerDied","Data":"41df2aeaccb16cf4f9d90cc5a50492cee7a95b07cc3b6aff5a2f5b33514789d9"} Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.882441 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c797bbccc-wln47" podStartSLOduration=2.882417993 podStartE2EDuration="2.882417993s" podCreationTimestamp="2026-02-19 19:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:06.862293042 +0000 UTC m=+1394.652959004" watchObservedRunningTime="2026-02-19 
19:42:06.882417993 +0000 UTC m=+1394.673083935" Feb 19 19:42:06 crc kubenswrapper[4787]: I0219 19:42:06.974159 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068256 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnwmq\" (UniqueName: \"kubernetes.io/projected/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-kube-api-access-nnwmq\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068348 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-etc-machine-id\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068405 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-combined-ca-bundle\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068487 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068504 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068525 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-scripts\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068649 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data-custom\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.068699 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-logs\") pod \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\" (UID: \"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd\") " Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.069392 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.069553 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-logs" (OuterVolumeSpecName: "logs") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.074359 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-kube-api-access-nnwmq" (OuterVolumeSpecName: "kube-api-access-nnwmq") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). InnerVolumeSpecName "kube-api-access-nnwmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.075625 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-scripts" (OuterVolumeSpecName: "scripts") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.078892 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.100705 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.139136 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data" (OuterVolumeSpecName: "config-data") pod "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" (UID: "a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.172120 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnwmq\" (UniqueName: \"kubernetes.io/projected/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-kube-api-access-nnwmq\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.172165 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.172178 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.172188 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.172201 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.172213 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.751894 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.865107 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd","Type":"ContainerDied","Data":"03c011ba364066c36e6f07c2d1bbc172765f45efff63e31f843217a846e447f4"} Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.865179 4787 scope.go:117] "RemoveContainer" containerID="bd951572ed8f312fdf70898dd30be73d5e9bd51c71d5607d93871c1d92ba14fa" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.865409 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.896167 4787 scope.go:117] "RemoveContainer" containerID="41df2aeaccb16cf4f9d90cc5a50492cee7a95b07cc3b6aff5a2f5b33514789d9" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.934655 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.959659 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.975160 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:07 crc kubenswrapper[4787]: E0219 19:42:07.975627 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.975641 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api" Feb 19 19:42:07 crc kubenswrapper[4787]: E0219 
19:42:07.975659 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api-log" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.975665 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api-log" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.975880 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api-log" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.975892 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" containerName="cinder-api" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.977532 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.983601 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.987058 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.987161 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 19:42:07 crc kubenswrapper[4787]: I0219 19:42:07.998258 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094102 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 
19:42:08.094189 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-scripts\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094230 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094258 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094287 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-logs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094303 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094319 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-config-data-custom\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094369 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-config-data\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.094413 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87j6\" (UniqueName: \"kubernetes.io/projected/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-kube-api-access-j87j6\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196136 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-scripts\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196201 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196226 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " 
pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196256 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-logs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196286 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-config-data-custom\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196327 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-config-data\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196373 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87j6\" (UniqueName: \"kubernetes.io/projected/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-kube-api-access-j87j6\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.196444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.197223 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.197664 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-logs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.201728 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-scripts\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.202318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.202559 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.202830 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.203682 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-config-data\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.204017 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-config-data-custom\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.221179 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87j6\" (UniqueName: \"kubernetes.io/projected/4d841c01-9ebe-4b54-b1c4-a8636ba01db1-kube-api-access-j87j6\") pod \"cinder-api-0\" (UID: \"4d841c01-9ebe-4b54-b1c4-a8636ba01db1\") " pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.296499 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.493573 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.914899 4787 generic.go:334] "Generic (PLEG): container finished" podID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerID="7f742740268ad7d8b052e9c387c5baea0b6a65240b85cad06868ba984049e347" exitCode=0 Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.915940 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd" path="/var/lib/kubelet/pods/a3a8e788-fdd2-4b4b-afe9-e91e9a0139cd/volumes" Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.916583 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 19:42:08 crc kubenswrapper[4787]: I0219 19:42:08.916821 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8c48d785-rdt7v" event={"ID":"62a31ecf-6e1f-474f-99ac-aa021dca2905","Type":"ContainerDied","Data":"7f742740268ad7d8b052e9c387c5baea0b6a65240b85cad06868ba984049e347"} Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.665543 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6k49" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" probeResult="failure" output=< Feb 19 19:42:09 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 19:42:09 crc kubenswrapper[4787]: > Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.730712 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853159 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-httpd-config\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853269 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-combined-ca-bundle\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853318 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-config\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853519 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vv82\" (UniqueName: \"kubernetes.io/projected/62a31ecf-6e1f-474f-99ac-aa021dca2905-kube-api-access-8vv82\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853655 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-public-tls-certs\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853711 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-ovndb-tls-certs\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.853796 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-internal-tls-certs\") pod \"62a31ecf-6e1f-474f-99ac-aa021dca2905\" (UID: \"62a31ecf-6e1f-474f-99ac-aa021dca2905\") " Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.879037 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.894523 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a31ecf-6e1f-474f-99ac-aa021dca2905-kube-api-access-8vv82" (OuterVolumeSpecName: "kube-api-access-8vv82") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "kube-api-access-8vv82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.941163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8c48d785-rdt7v" event={"ID":"62a31ecf-6e1f-474f-99ac-aa021dca2905","Type":"ContainerDied","Data":"767cdcba0b07305e5ef1ff6cf40a420149e94f72c2abc2f2cb5377533cffb36b"} Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.941186 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8c48d785-rdt7v" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.941253 4787 scope.go:117] "RemoveContainer" containerID="2bc5374b28da7c30db759a0e004f9ec9420ebe0b79447370e3f2a4dd6eb51485" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.945319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d841c01-9ebe-4b54-b1c4-a8636ba01db1","Type":"ContainerStarted","Data":"84c2e5df5e9a4175d94a9ca7bbddc6b49f7311469aa5fcc9514c126ac308c0ec"} Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.945359 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d841c01-9ebe-4b54-b1c4-a8636ba01db1","Type":"ContainerStarted","Data":"1ca6e1dbbe30c987588b58d47e16536ad0cf637c64fae131eae8c7fa7c9e326a"} Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.957338 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.957838 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vv82\" (UniqueName: \"kubernetes.io/projected/62a31ecf-6e1f-474f-99ac-aa021dca2905-kube-api-access-8vv82\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:09 crc kubenswrapper[4787]: I0219 19:42:09.998768 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.029156 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-config" (OuterVolumeSpecName: "config") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.038766 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.056727 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.060184 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.060214 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.060225 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.060233 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.091708 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62a31ecf-6e1f-474f-99ac-aa021dca2905" (UID: "62a31ecf-6e1f-474f-99ac-aa021dca2905"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.165392 4787 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62a31ecf-6e1f-474f-99ac-aa021dca2905-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.246531 4787 scope.go:117] "RemoveContainer" containerID="7f742740268ad7d8b052e9c387c5baea0b6a65240b85cad06868ba984049e347" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.315718 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d8c48d785-rdt7v"] Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.329669 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d8c48d785-rdt7v"] Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.542548 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.948490 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" path="/var/lib/kubelet/pods/62a31ecf-6e1f-474f-99ac-aa021dca2905/volumes" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.968801 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4d841c01-9ebe-4b54-b1c4-a8636ba01db1","Type":"ContainerStarted","Data":"57acf76c8e9d478c1ff03c961ef8ba716fb8b645d016f1096dd779b91004e77c"} Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.970525 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 19:42:10 crc kubenswrapper[4787]: I0219 19:42:10.999927 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.9999122099999997 podStartE2EDuration="3.99991221s" podCreationTimestamp="2026-02-19 19:42:07 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:10.997730986 +0000 UTC m=+1398.788396928" watchObservedRunningTime="2026-02-19 19:42:10.99991221 +0000 UTC m=+1398.790578152" Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.184572 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d48c87bfd-7pzgt" Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.266391 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d9b7d8444-dlm9r"] Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.266656 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d9b7d8444-dlm9r" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api-log" containerID="cri-o://06683d370d53595ebf3a7e94b294ca48f0ba7ee66770da19eb94eef266d08874" gracePeriod=30 Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.266729 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d9b7d8444-dlm9r" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api" containerID="cri-o://0b1ca01065887e7b42db747fa85ce04f2942a679d5099f91111d8c54d376993d" gracePeriod=30 Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.277183 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5d9b7d8444-dlm9r" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": EOF" Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.700789 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.742349 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 
19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.892772 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.980900 4787 generic.go:334] "Generic (PLEG): container finished" podID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerID="06683d370d53595ebf3a7e94b294ca48f0ba7ee66770da19eb94eef266d08874" exitCode=143 Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.981084 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d9b7d8444-dlm9r" event={"ID":"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1","Type":"ContainerDied","Data":"06683d370d53595ebf3a7e94b294ca48f0ba7ee66770da19eb94eef266d08874"} Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.981206 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="cinder-scheduler" containerID="cri-o://a5f03ced1b1c87f961ebc145ad67e998d469eac66375a00dce07ccb5ea8a27b7" gracePeriod=30 Feb 19 19:42:11 crc kubenswrapper[4787]: I0219 19:42:11.981654 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="probe" containerID="cri-o://96c773bf3959d319ad74805a687c879777245a5984acc6d1402124e85ec83bc8" gracePeriod=30 Feb 19 19:42:12 crc kubenswrapper[4787]: I0219 19:42:12.032944 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79cgv"] Feb 19 19:42:12 crc kubenswrapper[4787]: I0219 19:42:12.033446 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerName="dnsmasq-dns" containerID="cri-o://5ae52d7e615e9a07b6d20ce33046cae5660c2dee725f1f3879c848e83040b5f6" gracePeriod=10 Feb 19 19:42:12 crc 
kubenswrapper[4787]: I0219 19:42:12.995077 4787 generic.go:334] "Generic (PLEG): container finished" podID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerID="96c773bf3959d319ad74805a687c879777245a5984acc6d1402124e85ec83bc8" exitCode=0 Feb 19 19:42:12 crc kubenswrapper[4787]: I0219 19:42:12.995544 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55","Type":"ContainerDied","Data":"96c773bf3959d319ad74805a687c879777245a5984acc6d1402124e85ec83bc8"} Feb 19 19:42:12 crc kubenswrapper[4787]: I0219 19:42:12.997381 4787 generic.go:334] "Generic (PLEG): container finished" podID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerID="5ae52d7e615e9a07b6d20ce33046cae5660c2dee725f1f3879c848e83040b5f6" exitCode=0 Feb 19 19:42:12 crc kubenswrapper[4787]: I0219 19:42:12.999167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" event={"ID":"0eae0600-9f79-4c97-9af4-c5f2ac5de934","Type":"ContainerDied","Data":"5ae52d7e615e9a07b6d20ce33046cae5660c2dee725f1f3879c848e83040b5f6"} Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.112561 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.184995 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-svc\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.185058 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-sb\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.185115 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-nb\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.185144 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tnkg\" (UniqueName: \"kubernetes.io/projected/0eae0600-9f79-4c97-9af4-c5f2ac5de934-kube-api-access-9tnkg\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.185376 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-config\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.185461 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.209042 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eae0600-9f79-4c97-9af4-c5f2ac5de934-kube-api-access-9tnkg" (OuterVolumeSpecName: "kube-api-access-9tnkg") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "kube-api-access-9tnkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.259088 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.268122 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-config" (OuterVolumeSpecName: "config") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.287869 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.288243 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0\") pod \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\" (UID: \"0eae0600-9f79-4c97-9af4-c5f2ac5de934\") " Feb 19 19:42:13 crc kubenswrapper[4787]: W0219 19:42:13.288546 4787 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0eae0600-9f79-4c97-9af4-c5f2ac5de934/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.288579 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.288979 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tnkg\" (UniqueName: \"kubernetes.io/projected/0eae0600-9f79-4c97-9af4-c5f2ac5de934-kube-api-access-9tnkg\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.288997 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.289009 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.289020 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.296484 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.296843 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0eae0600-9f79-4c97-9af4-c5f2ac5de934" (UID: "0eae0600-9f79-4c97-9af4-c5f2ac5de934"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.391309 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:13 crc kubenswrapper[4787]: I0219 19:42:13.391377 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eae0600-9f79-4c97-9af4-c5f2ac5de934-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.012563 4787 generic.go:334] "Generic (PLEG): container finished" podID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerID="a5f03ced1b1c87f961ebc145ad67e998d469eac66375a00dce07ccb5ea8a27b7" exitCode=0 Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.012655 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55","Type":"ContainerDied","Data":"a5f03ced1b1c87f961ebc145ad67e998d469eac66375a00dce07ccb5ea8a27b7"} Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.015794 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" event={"ID":"0eae0600-9f79-4c97-9af4-c5f2ac5de934","Type":"ContainerDied","Data":"a82a58cd6f8e5b64347927897c1fa036e6ef590e31a47af06a359ac15584569a"} Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.015838 4787 scope.go:117] "RemoveContainer" containerID="5ae52d7e615e9a07b6d20ce33046cae5660c2dee725f1f3879c848e83040b5f6" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.015911 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79cgv" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.098976 4787 scope.go:117] "RemoveContainer" containerID="b8377d45bbc85c2f640dfbb024cfd3ebba75f09602ec13ee985995ff46421dc6" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.101729 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79cgv"] Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.112780 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79cgv"] Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.523353 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.629769 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-scripts\") pod \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.629838 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data-custom\") pod \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.630088 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data\") pod \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.630132 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wglsb\" (UniqueName: 
\"kubernetes.io/projected/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-kube-api-access-wglsb\") pod \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.630222 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-combined-ca-bundle\") pod \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.630284 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-etc-machine-id\") pod \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\" (UID: \"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55\") " Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.630522 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" (UID: "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.631275 4787 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.637097 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-scripts" (OuterVolumeSpecName: "scripts") pod "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" (UID: "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.637112 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-kube-api-access-wglsb" (OuterVolumeSpecName: "kube-api-access-wglsb") pod "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" (UID: "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55"). InnerVolumeSpecName "kube-api-access-wglsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.637181 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" (UID: "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.686502 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" (UID: "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.733089 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.733119 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.733128 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.733137 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wglsb\" (UniqueName: \"kubernetes.io/projected/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-kube-api-access-wglsb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.762834 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data" (OuterVolumeSpecName: "config-data") pod "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" (UID: "ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.834929 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:14 crc kubenswrapper[4787]: I0219 19:42:14.905314 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" path="/var/lib/kubelet/pods/0eae0600-9f79-4c97-9af4-c5f2ac5de934/volumes" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.027343 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55","Type":"ContainerDied","Data":"b02e302d33db50104c23658d15cb2ed4f49a8979827e56c430876ad0ae5a6416"} Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.027392 4787 scope.go:117] "RemoveContainer" containerID="96c773bf3959d319ad74805a687c879777245a5984acc6d1402124e85ec83bc8" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.027398 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.056860 4787 scope.go:117] "RemoveContainer" containerID="a5f03ced1b1c87f961ebc145ad67e998d469eac66375a00dce07ccb5ea8a27b7" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.058301 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.071212 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.103796 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:15 crc kubenswrapper[4787]: E0219 19:42:15.104311 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerName="init" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104328 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerName="init" Feb 19 19:42:15 crc kubenswrapper[4787]: E0219 19:42:15.104345 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="probe" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104351 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="probe" Feb 19 19:42:15 crc kubenswrapper[4787]: E0219 19:42:15.104369 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-api" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104376 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-api" Feb 19 19:42:15 crc kubenswrapper[4787]: E0219 19:42:15.104388 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerName="dnsmasq-dns" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104394 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerName="dnsmasq-dns" Feb 19 19:42:15 crc kubenswrapper[4787]: E0219 19:42:15.104400 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="cinder-scheduler" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104407 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="cinder-scheduler" Feb 19 19:42:15 crc kubenswrapper[4787]: E0219 19:42:15.104421 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-httpd" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104429 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-httpd" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104661 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eae0600-9f79-4c97-9af4-c5f2ac5de934" containerName="dnsmasq-dns" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104679 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="cinder-scheduler" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104693 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" containerName="probe" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104701 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-api" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.104711 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="62a31ecf-6e1f-474f-99ac-aa021dca2905" containerName="neutron-httpd" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.105875 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.108852 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.115241 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.243423 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b455\" (UniqueName: \"kubernetes.io/projected/3f4e423c-1e8b-47e3-af08-1190ee8942aa-kube-api-access-7b455\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.243695 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.243881 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.244031 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.244129 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.244329 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f4e423c-1e8b-47e3-af08-1190ee8942aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346617 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346694 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f4e423c-1e8b-47e3-af08-1190ee8942aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b455\" (UniqueName: \"kubernetes.io/projected/3f4e423c-1e8b-47e3-af08-1190ee8942aa-kube-api-access-7b455\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346814 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346846 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f4e423c-1e8b-47e3-af08-1190ee8942aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.346858 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.351365 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.353163 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.354521 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.355332 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f4e423c-1e8b-47e3-af08-1190ee8942aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.386341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b455\" (UniqueName: \"kubernetes.io/projected/3f4e423c-1e8b-47e3-af08-1190ee8942aa-kube-api-access-7b455\") pod \"cinder-scheduler-0\" (UID: \"3f4e423c-1e8b-47e3-af08-1190ee8942aa\") " pod="openstack/cinder-scheduler-0" Feb 19 19:42:15 crc kubenswrapper[4787]: I0219 19:42:15.424791 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 19:42:16 crc kubenswrapper[4787]: I0219 19:42:16.050407 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 19:42:16 crc kubenswrapper[4787]: I0219 19:42:16.071793 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f4e423c-1e8b-47e3-af08-1190ee8942aa","Type":"ContainerStarted","Data":"ce080e50a55e4b7dfa6df4babc6020d13e57a2641ca42966f25419d3ab8876bb"} Feb 19 19:42:16 crc kubenswrapper[4787]: I0219 19:42:16.752551 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d9b7d8444-dlm9r" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:52900->10.217.0.201:9311: read: connection reset by peer" Feb 19 19:42:16 crc kubenswrapper[4787]: I0219 19:42:16.752681 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d9b7d8444-dlm9r" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:52910->10.217.0.201:9311: read: connection reset by peer" Feb 19 19:42:16 crc kubenswrapper[4787]: I0219 19:42:16.911265 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55" path="/var/lib/kubelet/pods/ec22d3c2-ac92-4d8b-a2d0-205fdd0e9f55/volumes" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.060833 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.063875 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.080062 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.116031 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f4e423c-1e8b-47e3-af08-1190ee8942aa","Type":"ContainerStarted","Data":"394e0571e8f9522993a1e101cfdc0de71d9e9aa349755df6bc7cb478eccbe931"} Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.116581 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f76547854-2wh5m" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.130572 4787 generic.go:334] "Generic (PLEG): container finished" podID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerID="0b1ca01065887e7b42db747fa85ce04f2942a679d5099f91111d8c54d376993d" exitCode=0 Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.131546 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d9b7d8444-dlm9r" event={"ID":"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1","Type":"ContainerDied","Data":"0b1ca01065887e7b42db747fa85ce04f2942a679d5099f91111d8c54d376993d"} Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.242547 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85777598cb-xrn2z"] Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.383281 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.506706 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-logs\") pod \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.506818 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data\") pod \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.506938 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-combined-ca-bundle\") pod \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.507091 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data-custom\") pod \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.507125 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kdjk\" (UniqueName: \"kubernetes.io/projected/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-kube-api-access-6kdjk\") pod \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\" (UID: \"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1\") " Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.508587 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-logs" (OuterVolumeSpecName: "logs") pod "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" (UID: "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.521676 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-kube-api-access-6kdjk" (OuterVolumeSpecName: "kube-api-access-6kdjk") pod "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" (UID: "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1"). InnerVolumeSpecName "kube-api-access-6kdjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.529807 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" (UID: "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.565697 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" (UID: "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.609035 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.609257 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kdjk\" (UniqueName: \"kubernetes.io/projected/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-kube-api-access-6kdjk\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.609269 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.609277 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.618974 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data" (OuterVolumeSpecName: "config-data") pod "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" (UID: "ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.642358 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f8c6bbf7c-8r6fm" Feb 19 19:42:17 crc kubenswrapper[4787]: I0219 19:42:17.710905 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.143317 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d9b7d8444-dlm9r" event={"ID":"ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1","Type":"ContainerDied","Data":"794153e893f794cc258c72ae3de677b073fd39cb7a8c46596713795effaa7eca"} Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.143377 4787 scope.go:117] "RemoveContainer" containerID="0b1ca01065887e7b42db747fa85ce04f2942a679d5099f91111d8c54d376993d" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.143388 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d9b7d8444-dlm9r" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.148864 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f4e423c-1e8b-47e3-af08-1190ee8942aa","Type":"ContainerStarted","Data":"d8c5b6a631350f113962fc6283de216aa0c93966ceadbd10330d4a27582952ed"} Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.149031 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85777598cb-xrn2z" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-log" containerID="cri-o://653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d" gracePeriod=30 Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.149100 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-85777598cb-xrn2z" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-api" containerID="cri-o://0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de" gracePeriod=30 Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.180319 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.180300248 podStartE2EDuration="3.180300248s" podCreationTimestamp="2026-02-19 19:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:18.178245767 +0000 UTC m=+1405.968911709" watchObservedRunningTime="2026-02-19 19:42:18.180300248 +0000 UTC m=+1405.970966200" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.209900 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d9b7d8444-dlm9r"] Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.213583 4787 scope.go:117] "RemoveContainer" containerID="06683d370d53595ebf3a7e94b294ca48f0ba7ee66770da19eb94eef266d08874" 
Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.225642 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d9b7d8444-dlm9r"] Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.905006 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" path="/var/lib/kubelet/pods/ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1/volumes" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.977345 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 19:42:18 crc kubenswrapper[4787]: E0219 19:42:18.977850 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.977881 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api" Feb 19 19:42:18 crc kubenswrapper[4787]: E0219 19:42:18.977909 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api-log" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.977916 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api-log" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.978121 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api-log" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.978147 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaa0cf1-a0de-48f7-b786-3e93c0e0fdb1" containerName="barbican-api" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.978897 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.980771 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.982187 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.982551 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-gwvfj" Feb 19 19:42:18 crc kubenswrapper[4787]: I0219 19:42:18.990103 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.141425 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a61c722-e803-4b6c-9127-e4929553f802-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.141917 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a61c722-e803-4b6c-9127-e4929553f802-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.141942 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a61c722-e803-4b6c-9127-e4929553f802-openstack-config\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.142020 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8wzg\" (UniqueName: \"kubernetes.io/projected/9a61c722-e803-4b6c-9127-e4929553f802-kube-api-access-f8wzg\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.164954 4787 generic.go:334] "Generic (PLEG): container finished" podID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerID="653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d" exitCode=143 Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.165200 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85777598cb-xrn2z" event={"ID":"ed5b42a6-c95d-49ec-afac-3d41f285ed9c","Type":"ContainerDied","Data":"653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d"} Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.244739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a61c722-e803-4b6c-9127-e4929553f802-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.244846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a61c722-e803-4b6c-9127-e4929553f802-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.244867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a61c722-e803-4b6c-9127-e4929553f802-openstack-config\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 
crc kubenswrapper[4787]: I0219 19:42:19.244899 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8wzg\" (UniqueName: \"kubernetes.io/projected/9a61c722-e803-4b6c-9127-e4929553f802-kube-api-access-f8wzg\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.245768 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a61c722-e803-4b6c-9127-e4929553f802-openstack-config\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.252414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a61c722-e803-4b6c-9127-e4929553f802-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.261295 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a61c722-e803-4b6c-9127-e4929553f802-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.277449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8wzg\" (UniqueName: \"kubernetes.io/projected/9a61c722-e803-4b6c-9127-e4929553f802-kube-api-access-f8wzg\") pod \"openstackclient\" (UID: \"9a61c722-e803-4b6c-9127-e4929553f802\") " pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.294498 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.660142 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6k49" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" probeResult="failure" output=< Feb 19 19:42:19 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 19:42:19 crc kubenswrapper[4787]: > Feb 19 19:42:19 crc kubenswrapper[4787]: I0219 19:42:19.867562 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 19:42:20 crc kubenswrapper[4787]: I0219 19:42:20.178904 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9a61c722-e803-4b6c-9127-e4929553f802","Type":"ContainerStarted","Data":"a1e665b2154a4aafdbf09c1208d1314a646308dfb3f4f02c28efe8861bb01c17"} Feb 19 19:42:20 crc kubenswrapper[4787]: I0219 19:42:20.427261 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 19:42:20 crc kubenswrapper[4787]: I0219 19:42:20.680881 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 19:42:21 crc kubenswrapper[4787]: I0219 19:42:21.882257 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029447 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-logs\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029620 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-public-tls-certs\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029711 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-scripts\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-internal-tls-certs\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029833 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-config-data\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029955 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-combined-ca-bundle\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.029986 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54ggx\" (UniqueName: \"kubernetes.io/projected/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-kube-api-access-54ggx\") pod \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\" (UID: \"ed5b42a6-c95d-49ec-afac-3d41f285ed9c\") " Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.031888 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-logs" (OuterVolumeSpecName: "logs") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.046853 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-scripts" (OuterVolumeSpecName: "scripts") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.046952 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-kube-api-access-54ggx" (OuterVolumeSpecName: "kube-api-access-54ggx") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "kube-api-access-54ggx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.117093 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.129623 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-config-data" (OuterVolumeSpecName: "config-data") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.132502 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.132545 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54ggx\" (UniqueName: \"kubernetes.io/projected/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-kube-api-access-54ggx\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.132561 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.132590 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.132622 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.146034 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.177623 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ed5b42a6-c95d-49ec-afac-3d41f285ed9c" (UID: "ed5b42a6-c95d-49ec-afac-3d41f285ed9c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.206279 4787 generic.go:334] "Generic (PLEG): container finished" podID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerID="0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de" exitCode=0 Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.206326 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85777598cb-xrn2z" event={"ID":"ed5b42a6-c95d-49ec-afac-3d41f285ed9c","Type":"ContainerDied","Data":"0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de"} Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.206353 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85777598cb-xrn2z" event={"ID":"ed5b42a6-c95d-49ec-afac-3d41f285ed9c","Type":"ContainerDied","Data":"72410a967adadfe3ae8babf97841e9c0c6db1eccf61fa88b0be2615f9e2ca29e"} Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.206353 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85777598cb-xrn2z" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.206371 4787 scope.go:117] "RemoveContainer" containerID="0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.236145 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.236180 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed5b42a6-c95d-49ec-afac-3d41f285ed9c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.250982 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85777598cb-xrn2z"] Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.257775 4787 scope.go:117] "RemoveContainer" containerID="653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.261019 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85777598cb-xrn2z"] Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.325954 4787 scope.go:117] "RemoveContainer" containerID="0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de" Feb 19 19:42:22 crc kubenswrapper[4787]: E0219 19:42:22.326452 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de\": container with ID starting with 0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de not found: ID does not exist" containerID="0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.326489 
4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de"} err="failed to get container status \"0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de\": rpc error: code = NotFound desc = could not find container \"0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de\": container with ID starting with 0fe2b6125e2367bcf52e1755dba734d69133c0e465574cbf80a8650d9c8ad6de not found: ID does not exist" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.326514 4787 scope.go:117] "RemoveContainer" containerID="653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d" Feb 19 19:42:22 crc kubenswrapper[4787]: E0219 19:42:22.326786 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d\": container with ID starting with 653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d not found: ID does not exist" containerID="653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.326811 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d"} err="failed to get container status \"653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d\": rpc error: code = NotFound desc = could not find container \"653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d\": container with ID starting with 653956c6dfcba7e748bd055682a2076dee3d27bdfc921aff437254747e469f0d not found: ID does not exist" Feb 19 19:42:22 crc kubenswrapper[4787]: I0219 19:42:22.909752 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" 
path="/var/lib/kubelet/pods/ed5b42a6-c95d-49ec-afac-3d41f285ed9c/volumes" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.528662 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7dc655b5f9-gwjrd"] Feb 19 19:42:24 crc kubenswrapper[4787]: E0219 19:42:24.529385 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-log" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.529399 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-log" Feb 19 19:42:24 crc kubenswrapper[4787]: E0219 19:42:24.529438 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-api" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.529444 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-api" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.529676 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-log" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.529690 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5b42a6-c95d-49ec-afac-3d41f285ed9c" containerName="placement-api" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.531097 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.534846 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.534998 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.535032 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.545991 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7dc655b5f9-gwjrd"] Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.707068 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwm7c\" (UniqueName: \"kubernetes.io/projected/d0374190-eb11-435c-af6f-abd31845a33e-kube-api-access-vwm7c\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.707446 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-combined-ca-bundle\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.707579 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0374190-eb11-435c-af6f-abd31845a33e-run-httpd\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc 
kubenswrapper[4787]: I0219 19:42:24.707689 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0374190-eb11-435c-af6f-abd31845a33e-etc-swift\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.707773 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-internal-tls-certs\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.707909 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0374190-eb11-435c-af6f-abd31845a33e-log-httpd\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.708026 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-public-tls-certs\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.708138 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-config-data\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" 
Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.810560 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0374190-eb11-435c-af6f-abd31845a33e-log-httpd\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.810665 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-public-tls-certs\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.810716 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-config-data\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.810766 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwm7c\" (UniqueName: \"kubernetes.io/projected/d0374190-eb11-435c-af6f-abd31845a33e-kube-api-access-vwm7c\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.810843 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-combined-ca-bundle\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 
19:42:24.810916 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0374190-eb11-435c-af6f-abd31845a33e-run-httpd\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.810979 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0374190-eb11-435c-af6f-abd31845a33e-etc-swift\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.811007 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-internal-tls-certs\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.811763 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0374190-eb11-435c-af6f-abd31845a33e-run-httpd\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.814919 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0374190-eb11-435c-af6f-abd31845a33e-log-httpd\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.819088 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-public-tls-certs\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.833018 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-internal-tls-certs\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.833278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-config-data\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.833466 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d0374190-eb11-435c-af6f-abd31845a33e-etc-swift\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.846532 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0374190-eb11-435c-af6f-abd31845a33e-combined-ca-bundle\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.853358 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwm7c\" (UniqueName: 
\"kubernetes.io/projected/d0374190-eb11-435c-af6f-abd31845a33e-kube-api-access-vwm7c\") pod \"swift-proxy-7dc655b5f9-gwjrd\" (UID: \"d0374190-eb11-435c-af6f-abd31845a33e\") " pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:24 crc kubenswrapper[4787]: I0219 19:42:24.862937 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:25 crc kubenswrapper[4787]: I0219 19:42:25.576422 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7dc655b5f9-gwjrd"] Feb 19 19:42:25 crc kubenswrapper[4787]: W0219 19:42:25.589818 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0374190_eb11_435c_af6f_abd31845a33e.slice/crio-46e69481abfe1fb9bb35a05449b2d927fbb067c3bc52d2bc862f9fd5c249177e WatchSource:0}: Error finding container 46e69481abfe1fb9bb35a05449b2d927fbb067c3bc52d2bc862f9fd5c249177e: Status 404 returned error can't find the container with id 46e69481abfe1fb9bb35a05449b2d927fbb067c3bc52d2bc862f9fd5c249177e Feb 19 19:42:25 crc kubenswrapper[4787]: I0219 19:42:25.732846 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.268027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" event={"ID":"d0374190-eb11-435c-af6f-abd31845a33e","Type":"ContainerStarted","Data":"46e69481abfe1fb9bb35a05449b2d927fbb067c3bc52d2bc862f9fd5c249177e"} Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.723888 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-zxvrc"] Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.725431 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.747843 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zxvrc"] Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.818768 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-crv9s"] Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.822513 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.831182 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c1f3-account-create-update-2pwzh"] Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.832804 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.837441 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.843340 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-crv9s"] Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.859903 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c1f3-account-create-update-2pwzh"] Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.864284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c45f6a-d1bc-4584-9390-8b892bbbf384-operator-scripts\") pod \"nova-api-db-create-zxvrc\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") " pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.864335 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tqhb4\" (UniqueName: \"kubernetes.io/projected/22c45f6a-d1bc-4584-9390-8b892bbbf384-kube-api-access-tqhb4\") pod \"nova-api-db-create-zxvrc\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") " pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.966814 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9kq\" (UniqueName: \"kubernetes.io/projected/e798ac38-ff2d-4ed7-8008-b2869613cccb-kube-api-access-hm9kq\") pod \"nova-api-c1f3-account-create-update-2pwzh\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") " pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.966923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad9422-4bf0-478a-8b71-2892fe8ba113-operator-scripts\") pod \"nova-cell0-db-create-crv9s\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") " pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.967013 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e798ac38-ff2d-4ed7-8008-b2869613cccb-operator-scripts\") pod \"nova-api-c1f3-account-create-update-2pwzh\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") " pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.967096 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c45f6a-d1bc-4584-9390-8b892bbbf384-operator-scripts\") pod \"nova-api-db-create-zxvrc\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") " pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.967125 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmk4\" (UniqueName: \"kubernetes.io/projected/7dad9422-4bf0-478a-8b71-2892fe8ba113-kube-api-access-hhmk4\") pod \"nova-cell0-db-create-crv9s\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") " pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.967147 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqhb4\" (UniqueName: \"kubernetes.io/projected/22c45f6a-d1bc-4584-9390-8b892bbbf384-kube-api-access-tqhb4\") pod \"nova-api-db-create-zxvrc\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") " pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.968111 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c45f6a-d1bc-4584-9390-8b892bbbf384-operator-scripts\") pod \"nova-api-db-create-zxvrc\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") " pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:26 crc kubenswrapper[4787]: I0219 19:42:26.985377 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqhb4\" (UniqueName: \"kubernetes.io/projected/22c45f6a-d1bc-4584-9390-8b892bbbf384-kube-api-access-tqhb4\") pod \"nova-api-db-create-zxvrc\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") " pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.018669 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mfqxg"] Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.020320 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.046687 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mfqxg"] Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.054057 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxvrc" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.066452 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0dd6-account-create-update-nqvtx"] Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.068015 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.072309 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhmk4\" (UniqueName: \"kubernetes.io/projected/7dad9422-4bf0-478a-8b71-2892fe8ba113-kube-api-access-hhmk4\") pod \"nova-cell0-db-create-crv9s\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") " pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.072393 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9kq\" (UniqueName: \"kubernetes.io/projected/e798ac38-ff2d-4ed7-8008-b2869613cccb-kube-api-access-hm9kq\") pod \"nova-api-c1f3-account-create-update-2pwzh\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") " pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.072475 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad9422-4bf0-478a-8b71-2892fe8ba113-operator-scripts\") pod \"nova-cell0-db-create-crv9s\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") " 
pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.072558 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e798ac38-ff2d-4ed7-8008-b2869613cccb-operator-scripts\") pod \"nova-api-c1f3-account-create-update-2pwzh\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") " pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.073215 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e798ac38-ff2d-4ed7-8008-b2869613cccb-operator-scripts\") pod \"nova-api-c1f3-account-create-update-2pwzh\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") " pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.073869 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.073994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad9422-4bf0-478a-8b71-2892fe8ba113-operator-scripts\") pod \"nova-cell0-db-create-crv9s\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") " pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.092361 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhmk4\" (UniqueName: \"kubernetes.io/projected/7dad9422-4bf0-478a-8b71-2892fe8ba113-kube-api-access-hhmk4\") pod \"nova-cell0-db-create-crv9s\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") " pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.101718 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0dd6-account-create-update-nqvtx"] Feb 19 
19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.109146 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9kq\" (UniqueName: \"kubernetes.io/projected/e798ac38-ff2d-4ed7-8008-b2869613cccb-kube-api-access-hm9kq\") pod \"nova-api-c1f3-account-create-update-2pwzh\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") " pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.153168 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crv9s" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.167324 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c1f3-account-create-update-2pwzh" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.176061 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-operator-scripts\") pod \"nova-cell0-0dd6-account-create-update-nqvtx\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") " pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.176170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds5jb\" (UniqueName: \"kubernetes.io/projected/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-kube-api-access-ds5jb\") pod \"nova-cell0-0dd6-account-create-update-nqvtx\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") " pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.176217 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e17e04-4491-4b18-a420-c4f3ceaa09f3-operator-scripts\") pod 
\"nova-cell1-db-create-mfqxg\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") " pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.176263 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r85sq\" (UniqueName: \"kubernetes.io/projected/79e17e04-4491-4b18-a420-c4f3ceaa09f3-kube-api-access-r85sq\") pod \"nova-cell1-db-create-mfqxg\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") " pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.221902 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3e63-account-create-update-nmrrq"] Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.223521 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.231146 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.242665 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e63-account-create-update-nmrrq"] Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.279590 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r85sq\" (UniqueName: \"kubernetes.io/projected/79e17e04-4491-4b18-a420-c4f3ceaa09f3-kube-api-access-r85sq\") pod \"nova-cell1-db-create-mfqxg\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") " pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.279918 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-operator-scripts\") pod \"nova-cell0-0dd6-account-create-update-nqvtx\" (UID: 
\"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") " pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.280047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds5jb\" (UniqueName: \"kubernetes.io/projected/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-kube-api-access-ds5jb\") pod \"nova-cell0-0dd6-account-create-update-nqvtx\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") " pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.280108 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e17e04-4491-4b18-a420-c4f3ceaa09f3-operator-scripts\") pod \"nova-cell1-db-create-mfqxg\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") " pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.280908 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e17e04-4491-4b18-a420-c4f3ceaa09f3-operator-scripts\") pod \"nova-cell1-db-create-mfqxg\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") " pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.293322 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-operator-scripts\") pod \"nova-cell0-0dd6-account-create-update-nqvtx\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") " pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.296548 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r85sq\" (UniqueName: \"kubernetes.io/projected/79e17e04-4491-4b18-a420-c4f3ceaa09f3-kube-api-access-r85sq\") pod 
\"nova-cell1-db-create-mfqxg\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") " pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.297767 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds5jb\" (UniqueName: \"kubernetes.io/projected/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-kube-api-access-ds5jb\") pod \"nova-cell0-0dd6-account-create-update-nqvtx\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") " pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.299331 4787 generic.go:334] "Generic (PLEG): container finished" podID="364fd284-971f-4143-94fa-542904ee31fb" containerID="eaf8c48ebf8b26760eb25e605559c8dfde5862db918a32ca2f959880d75b69bf" exitCode=137 Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.299374 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerDied","Data":"eaf8c48ebf8b26760eb25e605559c8dfde5862db918a32ca2f959880d75b69bf"} Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.354687 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mfqxg" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.382701 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49123d4-eb5d-4788-a066-9ca1addf8bf6-operator-scripts\") pod \"nova-cell1-3e63-account-create-update-nmrrq\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") " pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.382960 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjq7\" (UniqueName: \"kubernetes.io/projected/a49123d4-eb5d-4788-a066-9ca1addf8bf6-kube-api-access-ljjq7\") pod \"nova-cell1-3e63-account-create-update-nmrrq\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") " pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.447133 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.485328 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49123d4-eb5d-4788-a066-9ca1addf8bf6-operator-scripts\") pod \"nova-cell1-3e63-account-create-update-nmrrq\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") " pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.485592 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjq7\" (UniqueName: \"kubernetes.io/projected/a49123d4-eb5d-4788-a066-9ca1addf8bf6-kube-api-access-ljjq7\") pod \"nova-cell1-3e63-account-create-update-nmrrq\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") " pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.486786 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49123d4-eb5d-4788-a066-9ca1addf8bf6-operator-scripts\") pod \"nova-cell1-3e63-account-create-update-nmrrq\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") " pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.506413 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjq7\" (UniqueName: \"kubernetes.io/projected/a49123d4-eb5d-4788-a066-9ca1addf8bf6-kube-api-access-ljjq7\") pod \"nova-cell1-3e63-account-create-update-nmrrq\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") " pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:27 crc kubenswrapper[4787]: I0219 19:42:27.547355 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" Feb 19 19:42:29 crc kubenswrapper[4787]: I0219 19:42:29.673304 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6k49" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" probeResult="failure" output=< Feb 19 19:42:29 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 19:42:29 crc kubenswrapper[4787]: > Feb 19 19:42:31 crc kubenswrapper[4787]: I0219 19:42:31.199046 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.185:3000/\": dial tcp 10.217.0.185:3000: connect: connection refused" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.147996 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224199 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-log-httpd\") pod \"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224270 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxfj5\" (UniqueName: \"kubernetes.io/projected/364fd284-971f-4143-94fa-542904ee31fb-kube-api-access-mxfj5\") pod \"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224350 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-sg-core-conf-yaml\") pod 
\"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224429 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-run-httpd\") pod \"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224482 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-combined-ca-bundle\") pod \"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224571 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-config-data\") pod \"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.224752 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-scripts\") pod \"364fd284-971f-4143-94fa-542904ee31fb\" (UID: \"364fd284-971f-4143-94fa-542904ee31fb\") " Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.227672 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.229592 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.229852 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364fd284-971f-4143-94fa-542904ee31fb-kube-api-access-mxfj5" (OuterVolumeSpecName: "kube-api-access-mxfj5") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "kube-api-access-mxfj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.231102 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-scripts" (OuterVolumeSpecName: "scripts") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.272802 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.307910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.327113 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.327142 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.327155 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxfj5\" (UniqueName: \"kubernetes.io/projected/364fd284-971f-4143-94fa-542904ee31fb-kube-api-access-mxfj5\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.327167 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.327176 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/364fd284-971f-4143-94fa-542904ee31fb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.327184 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.346090 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c1f3-account-create-update-2pwzh"] Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.356959 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-config-data" (OuterVolumeSpecName: "config-data") pod "364fd284-971f-4143-94fa-542904ee31fb" (UID: "364fd284-971f-4143-94fa-542904ee31fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.358095 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-crv9s"] Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.358909 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" event={"ID":"d0374190-eb11-435c-af6f-abd31845a33e","Type":"ContainerStarted","Data":"8f564c0accb4d1233024c76a32552e706816ccf074bd5c7763212599b1e42c7d"} Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.363040 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9a61c722-e803-4b6c-9127-e4929553f802","Type":"ContainerStarted","Data":"b0410b5085412a0ea169c2d6edd6c30f6e73f94f5975ffdb0435be886c7f3f02"} Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.382435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"364fd284-971f-4143-94fa-542904ee31fb","Type":"ContainerDied","Data":"1a77c2480f738a99a588846a88589dfd3942d0bf3523bf6adbefc8b61b2893e8"} Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.382490 4787 scope.go:117] "RemoveContainer" containerID="eaf8c48ebf8b26760eb25e605559c8dfde5862db918a32ca2f959880d75b69bf" Feb 19 19:42:33 crc 
kubenswrapper[4787]: I0219 19:42:33.382987 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.403723 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.3908843969999998 podStartE2EDuration="15.403704037s" podCreationTimestamp="2026-02-19 19:42:18 +0000 UTC" firstStartedPulling="2026-02-19 19:42:19.88174161 +0000 UTC m=+1407.672407542" lastFinishedPulling="2026-02-19 19:42:32.89456124 +0000 UTC m=+1420.685227182" observedRunningTime="2026-02-19 19:42:33.376627858 +0000 UTC m=+1421.167293800" watchObservedRunningTime="2026-02-19 19:42:33.403704037 +0000 UTC m=+1421.194369969" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.429183 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/364fd284-971f-4143-94fa-542904ee31fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.504289 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.507519 4787 scope.go:117] "RemoveContainer" containerID="040d66b1bc3b8803ac6e7143d2a753aae2b261a653f846aaa70df189cb99ccad" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.529373 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.554236 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:33 crc kubenswrapper[4787]: E0219 19:42:33.556120 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="ceilometer-notification-agent" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.556144 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="ceilometer-notification-agent" Feb 19 19:42:33 crc kubenswrapper[4787]: E0219 19:42:33.556163 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="sg-core" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.556170 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="sg-core" Feb 19 19:42:33 crc kubenswrapper[4787]: E0219 19:42:33.556196 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="proxy-httpd" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.556201 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="proxy-httpd" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.556571 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="ceilometer-notification-agent" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.556598 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="sg-core" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.556624 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="364fd284-971f-4143-94fa-542904ee31fb" containerName="proxy-httpd" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.559779 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.564509 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.565099 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.565223 4787 scope.go:117] "RemoveContainer" containerID="aa99fdb689a7fdeeee3b5b7a5f1a775ca3ab107fd54347c9c50ed4639ec382ae" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.565350 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.578587 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mfqxg"] Feb 19 19:42:33 crc kubenswrapper[4787]: W0219 19:42:33.586134 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e17e04_4491_4b18_a420_c4f3ceaa09f3.slice/crio-49587d0dfbc9ef74ef5aecbf27111c449dcfbeb31850cef5e3dbf0093bd076bd WatchSource:0}: Error finding container 49587d0dfbc9ef74ef5aecbf27111c449dcfbeb31850cef5e3dbf0093bd076bd: Status 404 returned error can't find the container with id 49587d0dfbc9ef74ef5aecbf27111c449dcfbeb31850cef5e3dbf0093bd076bd Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.632653 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-scripts\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.632704 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-log-httpd\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.632897 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-config-data\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.633004 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.633071 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9zn\" (UniqueName: \"kubernetes.io/projected/473362dd-40b3-4044-afee-350482d8d71d-kube-api-access-9d9zn\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.633117 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.633235 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735243 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d9zn\" (UniqueName: \"kubernetes.io/projected/473362dd-40b3-4044-afee-350482d8d71d-kube-api-access-9d9zn\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735280 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735335 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-run-httpd\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-scripts\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735480 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-log-httpd\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735538 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-config-data\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.735574 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.740163 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.740647 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-log-httpd\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.742888 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-scripts\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.743155 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-run-httpd\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.745082 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-config-data\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.746517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.753462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d9zn\" (UniqueName: \"kubernetes.io/projected/473362dd-40b3-4044-afee-350482d8d71d-kube-api-access-9d9zn\") pod \"ceilometer-0\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.897193 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.970820 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0dd6-account-create-update-nqvtx"] Feb 19 19:42:33 crc kubenswrapper[4787]: W0219 19:42:33.981327 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc57fd5d_2a99_4ef3_b706_bfa09e570c5f.slice/crio-8ea2170dba7c2cf95f54c0d9bccfd7f2eca99b4eddb873b15f4cc9502c4939c0 WatchSource:0}: Error finding container 8ea2170dba7c2cf95f54c0d9bccfd7f2eca99b4eddb873b15f4cc9502c4939c0: Status 404 returned error can't find the container with id 8ea2170dba7c2cf95f54c0d9bccfd7f2eca99b4eddb873b15f4cc9502c4939c0 Feb 19 19:42:33 crc kubenswrapper[4787]: I0219 19:42:33.983180 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3e63-account-create-update-nmrrq"] Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:33.998706 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-zxvrc"] Feb 19 19:42:34 crc kubenswrapper[4787]: W0219 19:42:34.002560 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49123d4_eb5d_4788_a066_9ca1addf8bf6.slice/crio-3a37fd4a5d6d635128f1c9da77630178581db029e451e896c656e7605078e0bd WatchSource:0}: Error finding container 3a37fd4a5d6d635128f1c9da77630178581db029e451e896c656e7605078e0bd: Status 404 returned error can't find the container with id 3a37fd4a5d6d635128f1c9da77630178581db029e451e896c656e7605078e0bd Feb 19 19:42:34 crc kubenswrapper[4787]: W0219 19:42:34.025139 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22c45f6a_d1bc_4584_9390_8b892bbbf384.slice/crio-46b8f016e47fe4e89c5debfb9dc8d682c5fc173899995b1f7c9470426be9eab7 WatchSource:0}: Error finding 
container 46b8f016e47fe4e89c5debfb9dc8d682c5fc173899995b1f7c9470426be9eab7: Status 404 returned error can't find the container with id 46b8f016e47fe4e89c5debfb9dc8d682c5fc173899995b1f7c9470426be9eab7 Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.396391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxvrc" event={"ID":"22c45f6a-d1bc-4584-9390-8b892bbbf384","Type":"ContainerStarted","Data":"46b8f016e47fe4e89c5debfb9dc8d682c5fc173899995b1f7c9470426be9eab7"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.410586 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" event={"ID":"d0374190-eb11-435c-af6f-abd31845a33e","Type":"ContainerStarted","Data":"99b0a155d9fc4ea2ad107fafdec1704cffc6768afae48ad0946865f81092e663"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.410943 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.411076 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.421698 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfqxg" event={"ID":"79e17e04-4491-4b18-a420-c4f3ceaa09f3","Type":"ContainerStarted","Data":"1515a021c019786f4ca25f4e5d048e65a564e127bdfc4d4fc639420829b29b60"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.421746 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfqxg" event={"ID":"79e17e04-4491-4b18-a420-c4f3ceaa09f3","Type":"ContainerStarted","Data":"49587d0dfbc9ef74ef5aecbf27111c449dcfbeb31850cef5e3dbf0093bd076bd"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.429144 4787 generic.go:334] "Generic (PLEG): container finished" podID="e798ac38-ff2d-4ed7-8008-b2869613cccb" 
containerID="25e6bb95e5ac41fe15e6482772a41f1aa9f471e7b4c2829e2e6ac03d65963d9e" exitCode=0 Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.429274 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c1f3-account-create-update-2pwzh" event={"ID":"e798ac38-ff2d-4ed7-8008-b2869613cccb","Type":"ContainerDied","Data":"25e6bb95e5ac41fe15e6482772a41f1aa9f471e7b4c2829e2e6ac03d65963d9e"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.429316 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c1f3-account-create-update-2pwzh" event={"ID":"e798ac38-ff2d-4ed7-8008-b2869613cccb","Type":"ContainerStarted","Data":"0a99d3a12e5c01e5b958373f8f3c05822efec7e629db30042d0aeed7d15bcdcf"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.433623 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" event={"ID":"a49123d4-eb5d-4788-a066-9ca1addf8bf6","Type":"ContainerStarted","Data":"3a37fd4a5d6d635128f1c9da77630178581db029e451e896c656e7605078e0bd"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.435221 4787 generic.go:334] "Generic (PLEG): container finished" podID="7dad9422-4bf0-478a-8b71-2892fe8ba113" containerID="c47491b48414203e976c1ffde0ba8604b69d58319bfe947894771c5d7ed7d9c0" exitCode=0 Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.436074 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crv9s" event={"ID":"7dad9422-4bf0-478a-8b71-2892fe8ba113","Type":"ContainerDied","Data":"c47491b48414203e976c1ffde0ba8604b69d58319bfe947894771c5d7ed7d9c0"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.436103 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crv9s" event={"ID":"7dad9422-4bf0-478a-8b71-2892fe8ba113","Type":"ContainerStarted","Data":"b4dca68e4b71a87a7b6b9587ad1035c1501ef61042c93aeaf3bd7767ef1437d8"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 
19:42:34.437832 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" event={"ID":"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f","Type":"ContainerStarted","Data":"8ea2170dba7c2cf95f54c0d9bccfd7f2eca99b4eddb873b15f4cc9502c4939c0"} Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.512415 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7dc655b5f9-gwjrd" podStartSLOduration=10.512391873 podStartE2EDuration="10.512391873s" podCreationTimestamp="2026-02-19 19:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:34.436435998 +0000 UTC m=+1422.227101940" watchObservedRunningTime="2026-02-19 19:42:34.512391873 +0000 UTC m=+1422.303057815" Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.522022 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-mfqxg" podStartSLOduration=8.522003103 podStartE2EDuration="8.522003103s" podCreationTimestamp="2026-02-19 19:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:34.480112366 +0000 UTC m=+1422.270778328" watchObservedRunningTime="2026-02-19 19:42:34.522003103 +0000 UTC m=+1422.312669045" Feb 19 19:42:34 crc kubenswrapper[4787]: W0219 19:42:34.611673 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod473362dd_40b3_4044_afee_350482d8d71d.slice/crio-6aa7928a27675db4f8471121da75d99cca35a1dba4105d5703563ffed59de829 WatchSource:0}: Error finding container 6aa7928a27675db4f8471121da75d99cca35a1dba4105d5703563ffed59de829: Status 404 returned error can't find the container with id 6aa7928a27675db4f8471121da75d99cca35a1dba4105d5703563ffed59de829 Feb 19 19:42:34 crc 
kubenswrapper[4787]: I0219 19:42:34.612165 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.846966 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c797bbccc-wln47" Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.905511 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364fd284-971f-4143-94fa-542904ee31fb" path="/var/lib/kubelet/pods/364fd284-971f-4143-94fa-542904ee31fb/volumes" Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.934914 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8549458c84-mvvtk"] Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.935134 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8549458c84-mvvtk" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-api" containerID="cri-o://127ce436378f533333eed531ffbbd97b13d94c9257292dfe3e69c526ce5b1ab0" gracePeriod=30 Feb 19 19:42:34 crc kubenswrapper[4787]: I0219 19:42:34.935283 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8549458c84-mvvtk" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-httpd" containerID="cri-o://96811f62ee8a14b2b827bbee843394ae893bb89375412557884b28626eeeb1cf" gracePeriod=30 Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.472446 4787 generic.go:334] "Generic (PLEG): container finished" podID="22c45f6a-d1bc-4584-9390-8b892bbbf384" containerID="20c97dc13e4384cac8d2f59d9eb73ffef478b18a6f46357f3d0fa07ea5ac91bf" exitCode=0 Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.472855 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxvrc" event={"ID":"22c45f6a-d1bc-4584-9390-8b892bbbf384","Type":"ContainerDied","Data":"20c97dc13e4384cac8d2f59d9eb73ffef478b18a6f46357f3d0fa07ea5ac91bf"} Feb 19 
19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.477454 4787 generic.go:334] "Generic (PLEG): container finished" podID="79e17e04-4491-4b18-a420-c4f3ceaa09f3" containerID="1515a021c019786f4ca25f4e5d048e65a564e127bdfc4d4fc639420829b29b60" exitCode=0
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.477513 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfqxg" event={"ID":"79e17e04-4491-4b18-a420-c4f3ceaa09f3","Type":"ContainerDied","Data":"1515a021c019786f4ca25f4e5d048e65a564e127bdfc4d4fc639420829b29b60"}
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.488069 4787 generic.go:334] "Generic (PLEG): container finished" podID="a49123d4-eb5d-4788-a066-9ca1addf8bf6" containerID="04f7a93e31587a111c58dedb13c8ae0228b86726df38b6eb289c0638b92d5c75" exitCode=0
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.488212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" event={"ID":"a49123d4-eb5d-4788-a066-9ca1addf8bf6","Type":"ContainerDied","Data":"04f7a93e31587a111c58dedb13c8ae0228b86726df38b6eb289c0638b92d5c75"}
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.495885 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerStarted","Data":"ef05c65af8b89c300162675ad69f1b96fcfb66e2b584a833182454318e094a50"}
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.495938 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerStarted","Data":"6aa7928a27675db4f8471121da75d99cca35a1dba4105d5703563ffed59de829"}
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.510091 4787 generic.go:334] "Generic (PLEG): container finished" podID="cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" containerID="f4de49a172364230b49d28ed091919721c6a711613d49d8c091b70003f6b8ca7" exitCode=0
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.510149 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" event={"ID":"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f","Type":"ContainerDied","Data":"f4de49a172364230b49d28ed091919721c6a711613d49d8c091b70003f6b8ca7"}
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.511961 4787 generic.go:334] "Generic (PLEG): container finished" podID="748246f5-7418-4111-b5bb-599732020b19" containerID="96811f62ee8a14b2b827bbee843394ae893bb89375412557884b28626eeeb1cf" exitCode=0
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.512876 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8549458c84-mvvtk" event={"ID":"748246f5-7418-4111-b5bb-599732020b19","Type":"ContainerDied","Data":"96811f62ee8a14b2b827bbee843394ae893bb89375412557884b28626eeeb1cf"}
Feb 19 19:42:35 crc kubenswrapper[4787]: I0219 19:42:35.513268 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.233621 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c1f3-account-create-update-2pwzh"
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.240281 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crv9s"
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.327660 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e798ac38-ff2d-4ed7-8008-b2869613cccb-operator-scripts\") pod \"e798ac38-ff2d-4ed7-8008-b2869613cccb\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") "
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.327837 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhmk4\" (UniqueName: \"kubernetes.io/projected/7dad9422-4bf0-478a-8b71-2892fe8ba113-kube-api-access-hhmk4\") pod \"7dad9422-4bf0-478a-8b71-2892fe8ba113\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") "
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.327884 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad9422-4bf0-478a-8b71-2892fe8ba113-operator-scripts\") pod \"7dad9422-4bf0-478a-8b71-2892fe8ba113\" (UID: \"7dad9422-4bf0-478a-8b71-2892fe8ba113\") "
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.327911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9kq\" (UniqueName: \"kubernetes.io/projected/e798ac38-ff2d-4ed7-8008-b2869613cccb-kube-api-access-hm9kq\") pod \"e798ac38-ff2d-4ed7-8008-b2869613cccb\" (UID: \"e798ac38-ff2d-4ed7-8008-b2869613cccb\") "
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.328652 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dad9422-4bf0-478a-8b71-2892fe8ba113-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dad9422-4bf0-478a-8b71-2892fe8ba113" (UID: "7dad9422-4bf0-478a-8b71-2892fe8ba113"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.328794 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e798ac38-ff2d-4ed7-8008-b2869613cccb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e798ac38-ff2d-4ed7-8008-b2869613cccb" (UID: "e798ac38-ff2d-4ed7-8008-b2869613cccb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.329622 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e798ac38-ff2d-4ed7-8008-b2869613cccb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.329649 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dad9422-4bf0-478a-8b71-2892fe8ba113-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.333865 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dad9422-4bf0-478a-8b71-2892fe8ba113-kube-api-access-hhmk4" (OuterVolumeSpecName: "kube-api-access-hhmk4") pod "7dad9422-4bf0-478a-8b71-2892fe8ba113" (UID: "7dad9422-4bf0-478a-8b71-2892fe8ba113"). InnerVolumeSpecName "kube-api-access-hhmk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.337218 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e798ac38-ff2d-4ed7-8008-b2869613cccb-kube-api-access-hm9kq" (OuterVolumeSpecName: "kube-api-access-hm9kq") pod "e798ac38-ff2d-4ed7-8008-b2869613cccb" (UID: "e798ac38-ff2d-4ed7-8008-b2869613cccb"). InnerVolumeSpecName "kube-api-access-hm9kq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.433975 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhmk4\" (UniqueName: \"kubernetes.io/projected/7dad9422-4bf0-478a-8b71-2892fe8ba113-kube-api-access-hhmk4\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.434009 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm9kq\" (UniqueName: \"kubernetes.io/projected/e798ac38-ff2d-4ed7-8008-b2869613cccb-kube-api-access-hm9kq\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.523269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerStarted","Data":"7db0d4879dbf46df1e3eebe19d9243e12faefeee2673963e808755605369381b"}
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.524847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c1f3-account-create-update-2pwzh" event={"ID":"e798ac38-ff2d-4ed7-8008-b2869613cccb","Type":"ContainerDied","Data":"0a99d3a12e5c01e5b958373f8f3c05822efec7e629db30042d0aeed7d15bcdcf"}
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.524892 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a99d3a12e5c01e5b958373f8f3c05822efec7e629db30042d0aeed7d15bcdcf"
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.524890 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c1f3-account-create-update-2pwzh"
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.526989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-crv9s" event={"ID":"7dad9422-4bf0-478a-8b71-2892fe8ba113","Type":"ContainerDied","Data":"b4dca68e4b71a87a7b6b9587ad1035c1501ef61042c93aeaf3bd7767ef1437d8"}
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.527049 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4dca68e4b71a87a7b6b9587ad1035c1501ef61042c93aeaf3bd7767ef1437d8"
Feb 19 19:42:36 crc kubenswrapper[4787]: I0219 19:42:36.527100 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-crv9s"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.113973 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mfqxg"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.265495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r85sq\" (UniqueName: \"kubernetes.io/projected/79e17e04-4491-4b18-a420-c4f3ceaa09f3-kube-api-access-r85sq\") pod \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.265982 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e17e04-4491-4b18-a420-c4f3ceaa09f3-operator-scripts\") pod \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\" (UID: \"79e17e04-4491-4b18-a420-c4f3ceaa09f3\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.269597 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e17e04-4491-4b18-a420-c4f3ceaa09f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79e17e04-4491-4b18-a420-c4f3ceaa09f3" (UID: "79e17e04-4491-4b18-a420-c4f3ceaa09f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.282872 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e17e04-4491-4b18-a420-c4f3ceaa09f3-kube-api-access-r85sq" (OuterVolumeSpecName: "kube-api-access-r85sq") pod "79e17e04-4491-4b18-a420-c4f3ceaa09f3" (UID: "79e17e04-4491-4b18-a420-c4f3ceaa09f3"). InnerVolumeSpecName "kube-api-access-r85sq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.382365 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e17e04-4491-4b18-a420-c4f3ceaa09f3-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.382859 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r85sq\" (UniqueName: \"kubernetes.io/projected/79e17e04-4491-4b18-a420-c4f3ceaa09f3-kube-api-access-r85sq\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.426936 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.455973 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.470994 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxvrc"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.587350 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49123d4-eb5d-4788-a066-9ca1addf8bf6-operator-scripts\") pod \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.587434 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c45f6a-d1bc-4584-9390-8b892bbbf384-operator-scripts\") pod \"22c45f6a-d1bc-4584-9390-8b892bbbf384\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.587535 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqhb4\" (UniqueName: \"kubernetes.io/projected/22c45f6a-d1bc-4584-9390-8b892bbbf384-kube-api-access-tqhb4\") pod \"22c45f6a-d1bc-4584-9390-8b892bbbf384\" (UID: \"22c45f6a-d1bc-4584-9390-8b892bbbf384\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.587585 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-operator-scripts\") pod \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.588324 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds5jb\" (UniqueName: \"kubernetes.io/projected/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-kube-api-access-ds5jb\") pod \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\" (UID: \"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.588403 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljjq7\" (UniqueName: \"kubernetes.io/projected/a49123d4-eb5d-4788-a066-9ca1addf8bf6-kube-api-access-ljjq7\") pod \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\" (UID: \"a49123d4-eb5d-4788-a066-9ca1addf8bf6\") "
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.593166 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" (UID: "cc57fd5d-2a99-4ef3-b706-bfa09e570c5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.593418 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49123d4-eb5d-4788-a066-9ca1addf8bf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a49123d4-eb5d-4788-a066-9ca1addf8bf6" (UID: "a49123d4-eb5d-4788-a066-9ca1addf8bf6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.593703 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c45f6a-d1bc-4584-9390-8b892bbbf384-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22c45f6a-d1bc-4584-9390-8b892bbbf384" (UID: "22c45f6a-d1bc-4584-9390-8b892bbbf384"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.599472 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49123d4-eb5d-4788-a066-9ca1addf8bf6-kube-api-access-ljjq7" (OuterVolumeSpecName: "kube-api-access-ljjq7") pod "a49123d4-eb5d-4788-a066-9ca1addf8bf6" (UID: "a49123d4-eb5d-4788-a066-9ca1addf8bf6"). InnerVolumeSpecName "kube-api-access-ljjq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.603729 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c45f6a-d1bc-4584-9390-8b892bbbf384-kube-api-access-tqhb4" (OuterVolumeSpecName: "kube-api-access-tqhb4") pod "22c45f6a-d1bc-4584-9390-8b892bbbf384" (UID: "22c45f6a-d1bc-4584-9390-8b892bbbf384"). InnerVolumeSpecName "kube-api-access-tqhb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.620842 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-kube-api-access-ds5jb" (OuterVolumeSpecName: "kube-api-access-ds5jb") pod "cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" (UID: "cc57fd5d-2a99-4ef3-b706-bfa09e570c5f"). InnerVolumeSpecName "kube-api-access-ds5jb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.627850 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerStarted","Data":"901ec92c5a513ba6e9efa10c261a83d2c7fc16d115735ecae610aee037c4a01a"}
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.668157 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx" event={"ID":"cc57fd5d-2a99-4ef3-b706-bfa09e570c5f","Type":"ContainerDied","Data":"8ea2170dba7c2cf95f54c0d9bccfd7f2eca99b4eddb873b15f4cc9502c4939c0"}
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.668221 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ea2170dba7c2cf95f54c0d9bccfd7f2eca99b4eddb873b15f4cc9502c4939c0"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.668315 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0dd6-account-create-update-nqvtx"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.686054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-zxvrc" event={"ID":"22c45f6a-d1bc-4584-9390-8b892bbbf384","Type":"ContainerDied","Data":"46b8f016e47fe4e89c5debfb9dc8d682c5fc173899995b1f7c9470426be9eab7"}
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.686091 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b8f016e47fe4e89c5debfb9dc8d682c5fc173899995b1f7c9470426be9eab7"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.686165 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-zxvrc"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.692416 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqhb4\" (UniqueName: \"kubernetes.io/projected/22c45f6a-d1bc-4584-9390-8b892bbbf384-kube-api-access-tqhb4\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.692454 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.692465 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds5jb\" (UniqueName: \"kubernetes.io/projected/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f-kube-api-access-ds5jb\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.692475 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljjq7\" (UniqueName: \"kubernetes.io/projected/a49123d4-eb5d-4788-a066-9ca1addf8bf6-kube-api-access-ljjq7\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.692483 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a49123d4-eb5d-4788-a066-9ca1addf8bf6-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.692496 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22c45f6a-d1bc-4584-9390-8b892bbbf384-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.696772 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mfqxg"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.699660 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mfqxg" event={"ID":"79e17e04-4491-4b18-a420-c4f3ceaa09f3","Type":"ContainerDied","Data":"49587d0dfbc9ef74ef5aecbf27111c449dcfbeb31850cef5e3dbf0093bd076bd"}
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.699693 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49587d0dfbc9ef74ef5aecbf27111c449dcfbeb31850cef5e3dbf0093bd076bd"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.711903 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq" event={"ID":"a49123d4-eb5d-4788-a066-9ca1addf8bf6","Type":"ContainerDied","Data":"3a37fd4a5d6d635128f1c9da77630178581db029e451e896c656e7605078e0bd"}
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.711940 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a37fd4a5d6d635128f1c9da77630178581db029e451e896c656e7605078e0bd"
Feb 19 19:42:37 crc kubenswrapper[4787]: I0219 19:42:37.711993 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3e63-account-create-update-nmrrq"
Feb 19 19:42:37 crc kubenswrapper[4787]: E0219 19:42:37.915376 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc57fd5d_2a99_4ef3_b706_bfa09e570c5f.slice\": RecentStats: unable to find data in memory cache]"
Feb 19 19:42:38 crc kubenswrapper[4787]: I0219 19:42:38.729935 4787 generic.go:334] "Generic (PLEG): container finished" podID="748246f5-7418-4111-b5bb-599732020b19" containerID="127ce436378f533333eed531ffbbd97b13d94c9257292dfe3e69c526ce5b1ab0" exitCode=0
Feb 19 19:42:38 crc kubenswrapper[4787]: I0219 19:42:38.730222 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8549458c84-mvvtk" event={"ID":"748246f5-7418-4111-b5bb-599732020b19","Type":"ContainerDied","Data":"127ce436378f533333eed531ffbbd97b13d94c9257292dfe3e69c526ce5b1ab0"}
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.058882 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8549458c84-mvvtk"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.226171 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-config\") pod \"748246f5-7418-4111-b5bb-599732020b19\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") "
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.226341 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-combined-ca-bundle\") pod \"748246f5-7418-4111-b5bb-599732020b19\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") "
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.226501 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-httpd-config\") pod \"748246f5-7418-4111-b5bb-599732020b19\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") "
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.226558 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-ovndb-tls-certs\") pod \"748246f5-7418-4111-b5bb-599732020b19\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") "
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.226596 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pckxh\" (UniqueName: \"kubernetes.io/projected/748246f5-7418-4111-b5bb-599732020b19-kube-api-access-pckxh\") pod \"748246f5-7418-4111-b5bb-599732020b19\" (UID: \"748246f5-7418-4111-b5bb-599732020b19\") "
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.231906 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "748246f5-7418-4111-b5bb-599732020b19" (UID: "748246f5-7418-4111-b5bb-599732020b19"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.232864 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748246f5-7418-4111-b5bb-599732020b19-kube-api-access-pckxh" (OuterVolumeSpecName: "kube-api-access-pckxh") pod "748246f5-7418-4111-b5bb-599732020b19" (UID: "748246f5-7418-4111-b5bb-599732020b19"). InnerVolumeSpecName "kube-api-access-pckxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.263869 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.264222 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.295758 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "748246f5-7418-4111-b5bb-599732020b19" (UID: "748246f5-7418-4111-b5bb-599732020b19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.308866 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-config" (OuterVolumeSpecName: "config") pod "748246f5-7418-4111-b5bb-599732020b19" (UID: "748246f5-7418-4111-b5bb-599732020b19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.330016 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-config\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.330053 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.330066 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.330076 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pckxh\" (UniqueName: \"kubernetes.io/projected/748246f5-7418-4111-b5bb-599732020b19-kube-api-access-pckxh\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.346492 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "748246f5-7418-4111-b5bb-599732020b19" (UID: "748246f5-7418-4111-b5bb-599732020b19"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.432501 4787 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/748246f5-7418-4111-b5bb-599732020b19-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.741149 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6k49" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" probeResult="failure" output=<
Feb 19 19:42:39 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s
Feb 19 19:42:39 crc kubenswrapper[4787]: >
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.741374 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8549458c84-mvvtk" event={"ID":"748246f5-7418-4111-b5bb-599732020b19","Type":"ContainerDied","Data":"88f9f5119d324f9c0ec2542c04470c7f8e1b4ba9244fedadcbbac07904c5c5fb"}
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.741414 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8549458c84-mvvtk"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.741432 4787 scope.go:117] "RemoveContainer" containerID="96811f62ee8a14b2b827bbee843394ae893bb89375412557884b28626eeeb1cf"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.744407 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerStarted","Data":"24932b80b6180589a6091c22c3fc19891c39c9b71cf53c6d9a163204e319e37d"}
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.744565 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-central-agent" containerID="cri-o://ef05c65af8b89c300162675ad69f1b96fcfb66e2b584a833182454318e094a50" gracePeriod=30
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.744599 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.744652 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="sg-core" containerID="cri-o://901ec92c5a513ba6e9efa10c261a83d2c7fc16d115735ecae610aee037c4a01a" gracePeriod=30
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.744686 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="proxy-httpd" containerID="cri-o://24932b80b6180589a6091c22c3fc19891c39c9b71cf53c6d9a163204e319e37d" gracePeriod=30
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.744778 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-notification-agent" containerID="cri-o://7db0d4879dbf46df1e3eebe19d9243e12faefeee2673963e808755605369381b" gracePeriod=30
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.772621 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.70972152 podStartE2EDuration="6.772589174s" podCreationTimestamp="2026-02-19 19:42:33 +0000 UTC" firstStartedPulling="2026-02-19 19:42:34.613952908 +0000 UTC m=+1422.404618850" lastFinishedPulling="2026-02-19 19:42:38.676820562 +0000 UTC m=+1426.467486504" observedRunningTime="2026-02-19 19:42:39.768949639 +0000 UTC m=+1427.559615581" watchObservedRunningTime="2026-02-19 19:42:39.772589174 +0000 UTC m=+1427.563255116"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.801892 4787 scope.go:117] "RemoveContainer" containerID="127ce436378f533333eed531ffbbd97b13d94c9257292dfe3e69c526ce5b1ab0"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.835688 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8549458c84-mvvtk"]
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.842830 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8549458c84-mvvtk"]
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.873796 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7dc655b5f9-gwjrd"
Feb 19 19:42:39 crc kubenswrapper[4787]: I0219 19:42:39.876832 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7dc655b5f9-gwjrd"
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.758489 4787 generic.go:334] "Generic (PLEG): container finished" podID="473362dd-40b3-4044-afee-350482d8d71d" containerID="24932b80b6180589a6091c22c3fc19891c39c9b71cf53c6d9a163204e319e37d" exitCode=0
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.758544 4787 generic.go:334] "Generic (PLEG): container finished" podID="473362dd-40b3-4044-afee-350482d8d71d" containerID="901ec92c5a513ba6e9efa10c261a83d2c7fc16d115735ecae610aee037c4a01a" exitCode=2
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.758552 4787 generic.go:334] "Generic (PLEG): container finished" podID="473362dd-40b3-4044-afee-350482d8d71d" containerID="7db0d4879dbf46df1e3eebe19d9243e12faefeee2673963e808755605369381b" exitCode=0
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.758557 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerDied","Data":"24932b80b6180589a6091c22c3fc19891c39c9b71cf53c6d9a163204e319e37d"}
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.758632 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerDied","Data":"901ec92c5a513ba6e9efa10c261a83d2c7fc16d115735ecae610aee037c4a01a"}
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.758649 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerDied","Data":"7db0d4879dbf46df1e3eebe19d9243e12faefeee2673963e808755605369381b"}
Feb 19 19:42:40 crc kubenswrapper[4787]: I0219 19:42:40.906997 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748246f5-7418-4111-b5bb-599732020b19" path="/var/lib/kubelet/pods/748246f5-7418-4111-b5bb-599732020b19/volumes"
Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.459905 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-9749b6cff-95bk5"]
Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460658 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-httpd"
Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460674 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-httpd"
Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460686 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c45f6a-d1bc-4584-9390-8b892bbbf384" containerName="mariadb-database-create"
Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460692 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c45f6a-d1bc-4584-9390-8b892bbbf384" containerName="mariadb-database-create"
Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460711 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e798ac38-ff2d-4ed7-8008-b2869613cccb" containerName="mariadb-account-create-update"
Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460717 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e798ac38-ff2d-4ed7-8008-b2869613cccb" containerName="mariadb-account-create-update"
Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460741 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49123d4-eb5d-4788-a066-9ca1addf8bf6" containerName="mariadb-account-create-update"
Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460746 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49123d4-eb5d-4788-a066-9ca1addf8bf6" containerName="mariadb-account-create-update"
Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460756 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" containerName="mariadb-account-create-update"
Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460762 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" containerName="mariadb-account-create-update"
Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460774 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e17e04-4491-4b18-a420-c4f3ceaa09f3" containerName="mariadb-database-create"
Feb 19
19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460780 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e17e04-4491-4b18-a420-c4f3ceaa09f3" containerName="mariadb-database-create" Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460790 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dad9422-4bf0-478a-8b71-2892fe8ba113" containerName="mariadb-database-create" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460795 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dad9422-4bf0-478a-8b71-2892fe8ba113" containerName="mariadb-database-create" Feb 19 19:42:41 crc kubenswrapper[4787]: E0219 19:42:41.460835 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-api" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.460842 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-api" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461031 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e798ac38-ff2d-4ed7-8008-b2869613cccb" containerName="mariadb-account-create-update" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461046 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dad9422-4bf0-478a-8b71-2892fe8ba113" containerName="mariadb-database-create" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461056 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c45f6a-d1bc-4584-9390-8b892bbbf384" containerName="mariadb-database-create" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461068 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" containerName="mariadb-account-create-update" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461074 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a49123d4-eb5d-4788-a066-9ca1addf8bf6" containerName="mariadb-account-create-update" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461086 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e17e04-4491-4b18-a420-c4f3ceaa09f3" containerName="mariadb-database-create" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461099 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-httpd" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461114 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="748246f5-7418-4111-b5bb-599732020b19" containerName="neutron-api" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.461853 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.463954 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-xrtpd" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.479968 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.480145 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.488552 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data-custom\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.488618 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptkfl\" (UniqueName: 
\"kubernetes.io/projected/5c223cd7-3eba-4f11-8f82-5ef5a479daee-kube-api-access-ptkfl\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.488742 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-combined-ca-bundle\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.488807 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.502586 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9749b6cff-95bk5"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.590913 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data-custom\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.590963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptkfl\" (UniqueName: \"kubernetes.io/projected/5c223cd7-3eba-4f11-8f82-5ef5a479daee-kube-api-access-ptkfl\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 
crc kubenswrapper[4787]: I0219 19:42:41.591178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-combined-ca-bundle\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.591265 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.602566 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-combined-ca-bundle\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.618184 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data-custom\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.619182 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.646438 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ptkfl\" (UniqueName: \"kubernetes.io/projected/5c223cd7-3eba-4f11-8f82-5ef5a479daee-kube-api-access-ptkfl\") pod \"heat-engine-9749b6cff-95bk5\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.758415 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-646fd54c56-p5pvk"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.759922 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.763918 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.787815 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-glqtc"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.790529 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.805713 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.806300 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz62z\" (UniqueName: \"kubernetes.io/projected/6533ed93-928f-47ce-8af0-419e5eb7c51c-kube-api-access-cz62z\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.806354 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data-custom\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.806466 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-combined-ca-bundle\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.806555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.837679 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-646fd54c56-p5pvk"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.873190 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7756b9d78c-glqtc"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908051 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908098 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908139 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908166 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-config\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908200 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6klj\" (UniqueName: \"kubernetes.io/projected/afc5e6eb-f81c-447a-a720-4304943b3451-kube-api-access-g6klj\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: 
\"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908227 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908310 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz62z\" (UniqueName: \"kubernetes.io/projected/6533ed93-928f-47ce-8af0-419e5eb7c51c-kube-api-access-cz62z\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908331 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data-custom\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.908411 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-combined-ca-bundle\") pod 
\"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.913314 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-66c8dc58c7-9t7wf"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.915669 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.918092 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-combined-ca-bundle\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.919784 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.920936 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.931299 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data-custom\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.959576 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66c8dc58c7-9t7wf"] Feb 19 19:42:41 crc kubenswrapper[4787]: I0219 19:42:41.964274 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz62z\" (UniqueName: \"kubernetes.io/projected/6533ed93-928f-47ce-8af0-419e5eb7c51c-kube-api-access-cz62z\") pod \"heat-cfnapi-646fd54c56-p5pvk\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.018762 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019028 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-config\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019464 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-g6klj\" (UniqueName: \"kubernetes.io/projected/afc5e6eb-f81c-447a-a720-4304943b3451-kube-api-access-g6klj\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019591 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019705 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-combined-ca-bundle\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.019909 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.020108 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data-custom\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.020294 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s2nr\" (UniqueName: \"kubernetes.io/projected/b1a43e2d-0ef2-4f15-b070-a99c13b82909-kube-api-access-8s2nr\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.020131 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.021824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-config\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.022174 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.022178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.023192 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.043298 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6klj\" (UniqueName: \"kubernetes.io/projected/afc5e6eb-f81c-447a-a720-4304943b3451-kube-api-access-g6klj\") pod \"dnsmasq-dns-7756b9d78c-glqtc\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.088282 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.130107 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s2nr\" (UniqueName: \"kubernetes.io/projected/b1a43e2d-0ef2-4f15-b070-a99c13b82909-kube-api-access-8s2nr\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.130233 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.130334 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-combined-ca-bundle\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: 
I0219 19:42:42.130422 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data-custom\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.134915 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.136145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.139020 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-combined-ca-bundle\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.142944 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data-custom\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.160500 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s2nr\" (UniqueName: \"kubernetes.io/projected/b1a43e2d-0ef2-4f15-b070-a99c13b82909-kube-api-access-8s2nr\") pod \"heat-api-66c8dc58c7-9t7wf\" (UID: 
\"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.379707 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rz4fd"] Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.381932 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.390241 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gcvzq" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.390449 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.390624 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.397344 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rz4fd"] Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.405398 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.450819 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c5tk\" (UniqueName: \"kubernetes.io/projected/18009b64-0e4a-438d-9a5e-7619312865aa-kube-api-access-2c5tk\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.453311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-config-data\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.453468 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-scripts\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.453715 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.531958 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9749b6cff-95bk5"] Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.555446 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.555518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c5tk\" (UniqueName: \"kubernetes.io/projected/18009b64-0e4a-438d-9a5e-7619312865aa-kube-api-access-2c5tk\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.555646 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-config-data\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.555694 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-scripts\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.567160 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.569421 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-config-data\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.571187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-scripts\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.577180 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c5tk\" (UniqueName: \"kubernetes.io/projected/18009b64-0e4a-438d-9a5e-7619312865aa-kube-api-access-2c5tk\") pod \"nova-cell0-conductor-db-sync-rz4fd\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.735512 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.771133 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-646fd54c56-p5pvk"] Feb 19 19:42:42 crc kubenswrapper[4787]: W0219 19:42:42.771769 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6533ed93_928f_47ce_8af0_419e5eb7c51c.slice/crio-3532e92e6cc5b194ca6bf96084f37966fb87455f5adc62e1c41a396ef1022d51 WatchSource:0}: Error finding container 3532e92e6cc5b194ca6bf96084f37966fb87455f5adc62e1c41a396ef1022d51: Status 404 returned error can't find the container with id 3532e92e6cc5b194ca6bf96084f37966fb87455f5adc62e1c41a396ef1022d51 Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.831693 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" event={"ID":"6533ed93-928f-47ce-8af0-419e5eb7c51c","Type":"ContainerStarted","Data":"3532e92e6cc5b194ca6bf96084f37966fb87455f5adc62e1c41a396ef1022d51"} Feb 19 19:42:42 crc kubenswrapper[4787]: I0219 19:42:42.851147 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9749b6cff-95bk5" event={"ID":"5c223cd7-3eba-4f11-8f82-5ef5a479daee","Type":"ContainerStarted","Data":"bcaa2c67faaf51e8978af9ddff3f3634796f5e9c4e5bec7966a1ecd87fd2d215"} Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.026410 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-glqtc"] Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.134138 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66c8dc58c7-9t7wf"] Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.399720 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rz4fd"] Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.869792 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-engine-9749b6cff-95bk5" event={"ID":"5c223cd7-3eba-4f11-8f82-5ef5a479daee","Type":"ContainerStarted","Data":"9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752"} Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.872280 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.876079 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" event={"ID":"18009b64-0e4a-438d-9a5e-7619312865aa","Type":"ContainerStarted","Data":"3bb951a0e8f4f5ba8429f32719f818eacb31a76bdc5b7ef07e51a14a0a67b2cb"} Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.894706 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-9749b6cff-95bk5" podStartSLOduration=2.894686819 podStartE2EDuration="2.894686819s" podCreationTimestamp="2026-02-19 19:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:43.891712555 +0000 UTC m=+1431.682378507" watchObservedRunningTime="2026-02-19 19:42:43.894686819 +0000 UTC m=+1431.685352751" Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.896780 4787 generic.go:334] "Generic (PLEG): container finished" podID="afc5e6eb-f81c-447a-a720-4304943b3451" containerID="817864a48f9313752448527f76e9bd5c36e80083233c2c77a85a09456880554f" exitCode=0 Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.896845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" event={"ID":"afc5e6eb-f81c-447a-a720-4304943b3451","Type":"ContainerDied","Data":"817864a48f9313752448527f76e9bd5c36e80083233c2c77a85a09456880554f"} Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.896870 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" event={"ID":"afc5e6eb-f81c-447a-a720-4304943b3451","Type":"ContainerStarted","Data":"606f35be3d0690365ecf4f54391518a077f7170646e0708086dc356c0213d299"} Feb 19 19:42:43 crc kubenswrapper[4787]: I0219 19:42:43.903935 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66c8dc58c7-9t7wf" event={"ID":"b1a43e2d-0ef2-4f15-b070-a99c13b82909","Type":"ContainerStarted","Data":"22cbc918d46678249fe9f319ead682ca2f1bfba122620e8950524a60841a5f1f"} Feb 19 19:42:44 crc kubenswrapper[4787]: I0219 19:42:44.965464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" event={"ID":"afc5e6eb-f81c-447a-a720-4304943b3451","Type":"ContainerStarted","Data":"456eb9b5f4b2db714f24d6dfecbcb833f45d387d35b609831910f177ac110e57"} Feb 19 19:42:44 crc kubenswrapper[4787]: I0219 19:42:44.965945 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:44 crc kubenswrapper[4787]: I0219 19:42:44.996978 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" podStartSLOduration=3.996960255 podStartE2EDuration="3.996960255s" podCreationTimestamp="2026-02-19 19:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:44.99540523 +0000 UTC m=+1432.786071172" watchObservedRunningTime="2026-02-19 19:42:44.996960255 +0000 UTC m=+1432.787626197" Feb 19 19:42:46 crc kubenswrapper[4787]: I0219 19:42:46.996939 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66c8dc58c7-9t7wf" event={"ID":"b1a43e2d-0ef2-4f15-b070-a99c13b82909","Type":"ContainerStarted","Data":"316f748abb9cd2c9d046d36021373b0b4f50ff202826fd07ad9f33691ff41a5e"} Feb 19 19:42:46 crc kubenswrapper[4787]: I0219 19:42:46.997577 4787 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:47 crc kubenswrapper[4787]: I0219 19:42:47.002030 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" event={"ID":"6533ed93-928f-47ce-8af0-419e5eb7c51c","Type":"ContainerStarted","Data":"44b8574a2c98ae341ceae471d1a929df27b4e10968425bcffe824aae3aafc03f"} Feb 19 19:42:47 crc kubenswrapper[4787]: I0219 19:42:47.002974 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:47 crc kubenswrapper[4787]: I0219 19:42:47.019806 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-66c8dc58c7-9t7wf" podStartSLOduration=2.970718497 podStartE2EDuration="6.019778854s" podCreationTimestamp="2026-02-19 19:42:41 +0000 UTC" firstStartedPulling="2026-02-19 19:42:43.142005551 +0000 UTC m=+1430.932671493" lastFinishedPulling="2026-02-19 19:42:46.191065908 +0000 UTC m=+1433.981731850" observedRunningTime="2026-02-19 19:42:47.016114409 +0000 UTC m=+1434.806780351" watchObservedRunningTime="2026-02-19 19:42:47.019778854 +0000 UTC m=+1434.810444816" Feb 19 19:42:47 crc kubenswrapper[4787]: I0219 19:42:47.046144 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" podStartSLOduration=2.631804635 podStartE2EDuration="6.046121015s" podCreationTimestamp="2026-02-19 19:42:41 +0000 UTC" firstStartedPulling="2026-02-19 19:42:42.774768672 +0000 UTC m=+1430.565434614" lastFinishedPulling="2026-02-19 19:42:46.189085052 +0000 UTC m=+1433.979750994" observedRunningTime="2026-02-19 19:42:47.038765425 +0000 UTC m=+1434.829431367" watchObservedRunningTime="2026-02-19 19:42:47.046121015 +0000 UTC m=+1434.836786957" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.404930 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b644788db-6g25v"] Feb 19 19:42:48 crc 
kubenswrapper[4787]: I0219 19:42:48.409897 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.422943 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-f4987686d-bgsdk"] Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.424724 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.427626 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfc6\" (UniqueName: \"kubernetes.io/projected/10c7b3fe-a75d-45bd-ba12-9a801a77798e-kube-api-access-qhfc6\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.427805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data-custom\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.427876 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-combined-ca-bundle\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.427912 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.440409 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b644788db-6g25v"] Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.461136 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f4987686d-bgsdk"] Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.499420 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66f69cc4bc-nvshc"] Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.501468 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.512503 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66f69cc4bc-nvshc"] Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529595 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-combined-ca-bundle\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529666 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529688 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-combined-ca-bundle\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529710 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529741 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data-custom\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529774 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529791 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bcl\" (UniqueName: \"kubernetes.io/projected/80787ea1-f265-44c5-a36b-e644b4472493-kube-api-access-l8bcl\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529808 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qhfc6\" (UniqueName: \"kubernetes.io/projected/10c7b3fe-a75d-45bd-ba12-9a801a77798e-kube-api-access-qhfc6\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529867 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-combined-ca-bundle\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data-custom\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.529955 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pg4k\" (UniqueName: \"kubernetes.io/projected/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-kube-api-access-7pg4k\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.530011 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data-custom\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.538264 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data-custom\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.539757 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-combined-ca-bundle\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.551414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.552004 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfc6\" (UniqueName: \"kubernetes.io/projected/10c7b3fe-a75d-45bd-ba12-9a801a77798e-kube-api-access-qhfc6\") pod \"heat-engine-5b644788db-6g25v\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.631768 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632128 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-combined-ca-bundle\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632173 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data-custom\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632250 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bcl\" (UniqueName: \"kubernetes.io/projected/80787ea1-f265-44c5-a36b-e644b4472493-kube-api-access-l8bcl\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632327 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-combined-ca-bundle\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632387 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data-custom\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.632452 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pg4k\" (UniqueName: \"kubernetes.io/projected/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-kube-api-access-7pg4k\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.636445 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.649174 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data-custom\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.651309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.653345 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-combined-ca-bundle\") pod 
\"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.653951 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-combined-ca-bundle\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.654601 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data-custom\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.657275 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pg4k\" (UniqueName: \"kubernetes.io/projected/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-kube-api-access-7pg4k\") pod \"heat-api-f4987686d-bgsdk\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.660225 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bcl\" (UniqueName: \"kubernetes.io/projected/80787ea1-f265-44c5-a36b-e644b4472493-kube-api-access-l8bcl\") pod \"heat-cfnapi-66f69cc4bc-nvshc\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.747641 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.750501 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.768814 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.872099 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:42:48 crc kubenswrapper[4787]: I0219 19:42:48.968281 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:42:49 crc kubenswrapper[4787]: I0219 19:42:49.119465 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6k49"] Feb 19 19:42:50 crc kubenswrapper[4787]: I0219 19:42:50.042259 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6k49" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" containerID="cri-o://5979d1a1099c4a81c967c497327717b7b09b5c839e7246d48b1eac97d1b029ab" gracePeriod=2 Feb 19 19:42:50 crc kubenswrapper[4787]: I0219 19:42:50.948029 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66c8dc58c7-9t7wf"] Feb 19 19:42:50 crc kubenswrapper[4787]: I0219 19:42:50.948561 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-66c8dc58c7-9t7wf" podUID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" containerName="heat-api" containerID="cri-o://316f748abb9cd2c9d046d36021373b0b4f50ff202826fd07ad9f33691ff41a5e" gracePeriod=60 Feb 19 19:42:50 crc kubenswrapper[4787]: I0219 19:42:50.981126 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-646fd54c56-p5pvk"] Feb 19 19:42:50 crc kubenswrapper[4787]: I0219 19:42:50.981364 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/heat-cfnapi-646fd54c56-p5pvk" podUID="6533ed93-928f-47ce-8af0-419e5eb7c51c" containerName="heat-cfnapi" containerID="cri-o://44b8574a2c98ae341ceae471d1a929df27b4e10968425bcffe824aae3aafc03f" gracePeriod=60 Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.066263 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c8d9fd94c-f2vvf"] Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.069052 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.073213 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.074200 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.110708 4787 generic.go:334] "Generic (PLEG): container finished" podID="473362dd-40b3-4044-afee-350482d8d71d" containerID="ef05c65af8b89c300162675ad69f1b96fcfb66e2b584a833182454318e094a50" exitCode=0 Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.110769 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerDied","Data":"ef05c65af8b89c300162675ad69f1b96fcfb66e2b584a833182454318e094a50"} Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.114349 4787 generic.go:334] "Generic (PLEG): container finished" podID="b341cddb-4e14-4928-af2b-18b902d1999c" containerID="5979d1a1099c4a81c967c497327717b7b09b5c839e7246d48b1eac97d1b029ab" exitCode=0 Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.114544 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" 
event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerDied","Data":"5979d1a1099c4a81c967c497327717b7b09b5c839e7246d48b1eac97d1b029ab"} Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.118141 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c8d9fd94c-f2vvf"] Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.131647 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5b5b7b88f4-48559"] Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.133134 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.136156 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.136307 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.147117 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b5b7b88f4-48559"] Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233299 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng2x6\" (UniqueName: \"kubernetes.io/projected/a10056f0-3bd6-4c6b-891b-b671799f5d9d-kube-api-access-ng2x6\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233384 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-internal-tls-certs\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 
19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233466 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-public-tls-certs\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233848 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data-custom\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233905 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t95n8\" (UniqueName: \"kubernetes.io/projected/c965959c-8adc-4c6e-a275-d05e8a3b7223-kube-api-access-t95n8\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233956 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-combined-ca-bundle\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " 
pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.233990 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data-custom\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.234024 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-public-tls-certs\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.234089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-internal-tls-certs\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.234153 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.234208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-combined-ca-bundle\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: 
\"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.336326 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data-custom\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337164 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t95n8\" (UniqueName: \"kubernetes.io/projected/c965959c-8adc-4c6e-a275-d05e8a3b7223-kube-api-access-t95n8\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337202 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-combined-ca-bundle\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337230 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data-custom\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337249 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-public-tls-certs\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: 
\"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337296 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-internal-tls-certs\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337342 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337381 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-combined-ca-bundle\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng2x6\" (UniqueName: \"kubernetes.io/projected/a10056f0-3bd6-4c6b-891b-b671799f5d9d-kube-api-access-ng2x6\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-internal-tls-certs\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " 
pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337623 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.337658 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-public-tls-certs\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.349250 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data-custom\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.351039 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data-custom\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.351281 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-combined-ca-bundle\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: 
I0219 19:42:51.351805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.352285 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.353440 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-internal-tls-certs\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.355501 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-combined-ca-bundle\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.355885 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-public-tls-certs\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.357379 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ng2x6\" (UniqueName: \"kubernetes.io/projected/a10056f0-3bd6-4c6b-891b-b671799f5d9d-kube-api-access-ng2x6\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.358814 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t95n8\" (UniqueName: \"kubernetes.io/projected/c965959c-8adc-4c6e-a275-d05e8a3b7223-kube-api-access-t95n8\") pod \"heat-cfnapi-5b5b7b88f4-48559\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.359319 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-public-tls-certs\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.360370 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-internal-tls-certs\") pod \"heat-api-6c8d9fd94c-f2vvf\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.439354 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:51 crc kubenswrapper[4787]: I0219 19:42:51.457191 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.094524 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" podUID="6533ed93-928f-47ce-8af0-419e5eb7c51c" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.128967 4787 generic.go:334] "Generic (PLEG): container finished" podID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" containerID="316f748abb9cd2c9d046d36021373b0b4f50ff202826fd07ad9f33691ff41a5e" exitCode=0 Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.129042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66c8dc58c7-9t7wf" event={"ID":"b1a43e2d-0ef2-4f15-b070-a99c13b82909","Type":"ContainerDied","Data":"316f748abb9cd2c9d046d36021373b0b4f50ff202826fd07ad9f33691ff41a5e"} Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.132319 4787 generic.go:334] "Generic (PLEG): container finished" podID="6533ed93-928f-47ce-8af0-419e5eb7c51c" containerID="44b8574a2c98ae341ceae471d1a929df27b4e10968425bcffe824aae3aafc03f" exitCode=0 Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.132357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" event={"ID":"6533ed93-928f-47ce-8af0-419e5eb7c51c","Type":"ContainerDied","Data":"44b8574a2c98ae341ceae471d1a929df27b4e10968425bcffe824aae3aafc03f"} Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.136702 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.206490 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pcz54"] Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.206796 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerName="dnsmasq-dns" containerID="cri-o://3e767471a7e41ea35ca98aa52aa978c8979e879e75ecf04adb8f351dd6e50b50" gracePeriod=10 Feb 19 19:42:52 crc kubenswrapper[4787]: I0219 19:42:52.408011 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-66c8dc58c7-9t7wf" podUID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused" Feb 19 19:42:53 crc kubenswrapper[4787]: I0219 19:42:53.146520 4787 generic.go:334] "Generic (PLEG): container finished" podID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerID="3e767471a7e41ea35ca98aa52aa978c8979e879e75ecf04adb8f351dd6e50b50" exitCode=0 Feb 19 19:42:53 crc kubenswrapper[4787]: I0219 19:42:53.146623 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" event={"ID":"3c04f822-c3ee-4f22-b9a7-1ad34dca0220","Type":"ContainerDied","Data":"3e767471a7e41ea35ca98aa52aa978c8979e879e75ecf04adb8f351dd6e50b50"} Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.139115 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.177902 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data\") pod \"6533ed93-928f-47ce-8af0-419e5eb7c51c\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.178266 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data-custom\") pod \"6533ed93-928f-47ce-8af0-419e5eb7c51c\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.178326 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-combined-ca-bundle\") pod \"6533ed93-928f-47ce-8af0-419e5eb7c51c\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.178456 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz62z\" (UniqueName: \"kubernetes.io/projected/6533ed93-928f-47ce-8af0-419e5eb7c51c-kube-api-access-cz62z\") pod \"6533ed93-928f-47ce-8af0-419e5eb7c51c\" (UID: \"6533ed93-928f-47ce-8af0-419e5eb7c51c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.199031 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6533ed93-928f-47ce-8af0-419e5eb7c51c-kube-api-access-cz62z" (OuterVolumeSpecName: "kube-api-access-cz62z") pod "6533ed93-928f-47ce-8af0-419e5eb7c51c" (UID: "6533ed93-928f-47ce-8af0-419e5eb7c51c"). InnerVolumeSpecName "kube-api-access-cz62z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.214855 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6533ed93-928f-47ce-8af0-419e5eb7c51c" (UID: "6533ed93-928f-47ce-8af0-419e5eb7c51c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.247527 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b644788db-6g25v"] Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.259005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6k49" event={"ID":"b341cddb-4e14-4928-af2b-18b902d1999c","Type":"ContainerDied","Data":"e26752663bc11dce1b1d2a46d6e763f7163eaacb9354aa82a4199d264509077d"} Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.259054 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e26752663bc11dce1b1d2a46d6e763f7163eaacb9354aa82a4199d264509077d" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.268346 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"473362dd-40b3-4044-afee-350482d8d71d","Type":"ContainerDied","Data":"6aa7928a27675db4f8471121da75d99cca35a1dba4105d5703563ffed59de829"} Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.268387 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa7928a27675db4f8471121da75d99cca35a1dba4105d5703563ffed59de829" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.277109 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" 
event={"ID":"6533ed93-928f-47ce-8af0-419e5eb7c51c","Type":"ContainerDied","Data":"3532e92e6cc5b194ca6bf96084f37966fb87455f5adc62e1c41a396ef1022d51"} Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.277157 4787 scope.go:117] "RemoveContainer" containerID="44b8574a2c98ae341ceae471d1a929df27b4e10968425bcffe824aae3aafc03f" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.277287 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-646fd54c56-p5pvk" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.281548 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz62z\" (UniqueName: \"kubernetes.io/projected/6533ed93-928f-47ce-8af0-419e5eb7c51c-kube-api-access-cz62z\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.281569 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.290784 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6533ed93-928f-47ce-8af0-419e5eb7c51c" (UID: "6533ed93-928f-47ce-8af0-419e5eb7c51c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.318176 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" event={"ID":"3c04f822-c3ee-4f22-b9a7-1ad34dca0220","Type":"ContainerDied","Data":"c8df70089f8578408d499bf6a13ce1a4bdc2e1a094f352de903378081bbb591f"} Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.318216 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8df70089f8578408d499bf6a13ce1a4bdc2e1a094f352de903378081bbb591f" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.319914 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data" (OuterVolumeSpecName: "config-data") pod "6533ed93-928f-47ce-8af0-419e5eb7c51c" (UID: "6533ed93-928f-47ce-8af0-419e5eb7c51c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.323004 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66c8dc58c7-9t7wf" event={"ID":"b1a43e2d-0ef2-4f15-b070-a99c13b82909","Type":"ContainerDied","Data":"22cbc918d46678249fe9f319ead682ca2f1bfba122620e8950524a60841a5f1f"} Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.323039 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22cbc918d46678249fe9f319ead682ca2f1bfba122620e8950524a60841a5f1f" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.387919 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.387951 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6533ed93-928f-47ce-8af0-419e5eb7c51c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.401069 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.450166 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.449598 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.454792 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.490034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-swift-storage-0\") pod \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.490382 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz8ws\" (UniqueName: \"kubernetes.io/projected/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-kube-api-access-jz8ws\") pod \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.490561 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-nb\") pod \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 
19:42:56.490651 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-sb\") pod \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.490679 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-svc\") pod \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.490715 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-config\") pod \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\" (UID: \"3c04f822-c3ee-4f22-b9a7-1ad34dca0220\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.522559 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-kube-api-access-jz8ws" (OuterVolumeSpecName: "kube-api-access-jz8ws") pod "3c04f822-c3ee-4f22-b9a7-1ad34dca0220" (UID: "3c04f822-c3ee-4f22-b9a7-1ad34dca0220"). InnerVolumeSpecName "kube-api-access-jz8ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601205 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-combined-ca-bundle\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601288 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-catalog-content\") pod \"b341cddb-4e14-4928-af2b-18b902d1999c\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601338 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-utilities\") pod \"b341cddb-4e14-4928-af2b-18b902d1999c\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601403 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzh5g\" (UniqueName: \"kubernetes.io/projected/b341cddb-4e14-4928-af2b-18b902d1999c-kube-api-access-tzh5g\") pod \"b341cddb-4e14-4928-af2b-18b902d1999c\" (UID: \"b341cddb-4e14-4928-af2b-18b902d1999c\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-sg-core-conf-yaml\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601588 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-config-data\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601626 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-combined-ca-bundle\") pod \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601648 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data\") pod \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601669 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-run-httpd\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601692 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d9zn\" (UniqueName: \"kubernetes.io/projected/473362dd-40b3-4044-afee-350482d8d71d-kube-api-access-9d9zn\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601762 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s2nr\" (UniqueName: \"kubernetes.io/projected/b1a43e2d-0ef2-4f15-b070-a99c13b82909-kube-api-access-8s2nr\") pod \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " Feb 19 19:42:56 
crc kubenswrapper[4787]: I0219 19:42:56.601798 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-scripts\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601863 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data-custom\") pod \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\" (UID: \"b1a43e2d-0ef2-4f15-b070-a99c13b82909\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.601903 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-log-httpd\") pod \"473362dd-40b3-4044-afee-350482d8d71d\" (UID: \"473362dd-40b3-4044-afee-350482d8d71d\") " Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.602769 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz8ws\" (UniqueName: \"kubernetes.io/projected/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-kube-api-access-jz8ws\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.604381 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.607087 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-utilities" (OuterVolumeSpecName: "utilities") pod "b341cddb-4e14-4928-af2b-18b902d1999c" (UID: "b341cddb-4e14-4928-af2b-18b902d1999c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.607774 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.625658 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b341cddb-4e14-4928-af2b-18b902d1999c-kube-api-access-tzh5g" (OuterVolumeSpecName: "kube-api-access-tzh5g") pod "b341cddb-4e14-4928-af2b-18b902d1999c" (UID: "b341cddb-4e14-4928-af2b-18b902d1999c"). InnerVolumeSpecName "kube-api-access-tzh5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.655852 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-scripts" (OuterVolumeSpecName: "scripts") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.656900 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473362dd-40b3-4044-afee-350482d8d71d-kube-api-access-9d9zn" (OuterVolumeSpecName: "kube-api-access-9d9zn") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "kube-api-access-9d9zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.666446 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1a43e2d-0ef2-4f15-b070-a99c13b82909" (UID: "b1a43e2d-0ef2-4f15-b070-a99c13b82909"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.666688 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a43e2d-0ef2-4f15-b070-a99c13b82909-kube-api-access-8s2nr" (OuterVolumeSpecName: "kube-api-access-8s2nr") pod "b1a43e2d-0ef2-4f15-b070-a99c13b82909" (UID: "b1a43e2d-0ef2-4f15-b070-a99c13b82909"). InnerVolumeSpecName "kube-api-access-8s2nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.669980 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-646fd54c56-p5pvk"] Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.675268 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c04f822-c3ee-4f22-b9a7-1ad34dca0220" (UID: "3c04f822-c3ee-4f22-b9a7-1ad34dca0220"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.694170 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-646fd54c56-p5pvk"] Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.698485 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-config" (OuterVolumeSpecName: "config") pod "3c04f822-c3ee-4f22-b9a7-1ad34dca0220" (UID: "3c04f822-c3ee-4f22-b9a7-1ad34dca0220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705358 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzh5g\" (UniqueName: \"kubernetes.io/projected/b341cddb-4e14-4928-af2b-18b902d1999c-kube-api-access-tzh5g\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705390 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705399 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705409 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d9zn\" (UniqueName: \"kubernetes.io/projected/473362dd-40b3-4044-afee-350482d8d71d-kube-api-access-9d9zn\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705418 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s2nr\" (UniqueName: \"kubernetes.io/projected/b1a43e2d-0ef2-4f15-b070-a99c13b82909-kube-api-access-8s2nr\") on node \"crc\" DevicePath \"\"" Feb 
19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705428 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705436 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705444 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705453 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/473362dd-40b3-4044-afee-350482d8d71d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.705461 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.789991 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b341cddb-4e14-4928-af2b-18b902d1999c" (UID: "b341cddb-4e14-4928-af2b-18b902d1999c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.808283 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b341cddb-4e14-4928-af2b-18b902d1999c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.872278 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c04f822-c3ee-4f22-b9a7-1ad34dca0220" (UID: "3c04f822-c3ee-4f22-b9a7-1ad34dca0220"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.885705 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1a43e2d-0ef2-4f15-b070-a99c13b82909" (UID: "b1a43e2d-0ef2-4f15-b070-a99c13b82909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.902044 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.911142 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.911378 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.911488 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.912752 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6533ed93-928f-47ce-8af0-419e5eb7c51c" path="/var/lib/kubelet/pods/6533ed93-928f-47ce-8af0-419e5eb7c51c/volumes" Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.944482 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data" (OuterVolumeSpecName: "config-data") pod "b1a43e2d-0ef2-4f15-b070-a99c13b82909" (UID: "b1a43e2d-0ef2-4f15-b070-a99c13b82909"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:56 crc kubenswrapper[4787]: W0219 19:42:56.971332 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ea81b9_beb0_484c_ba3c_b7b1364d179f.slice/crio-322b3d58502c1c65cb93e271d3439c218cd95da8256b15f5f98b75bce7522f55 WatchSource:0}: Error finding container 322b3d58502c1c65cb93e271d3439c218cd95da8256b15f5f98b75bce7522f55: Status 404 returned error can't find the container with id 322b3d58502c1c65cb93e271d3439c218cd95da8256b15f5f98b75bce7522f55 Feb 19 19:42:56 crc kubenswrapper[4787]: I0219 19:42:56.994529 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c04f822-c3ee-4f22-b9a7-1ad34dca0220" (UID: "3c04f822-c3ee-4f22-b9a7-1ad34dca0220"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.016197 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.016232 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a43e2d-0ef2-4f15-b070-a99c13b82909-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.051476 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c04f822-c3ee-4f22-b9a7-1ad34dca0220" (UID: "3c04f822-c3ee-4f22-b9a7-1ad34dca0220"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.056349 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.093461 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-config-data" (OuterVolumeSpecName: "config-data") pod "473362dd-40b3-4044-afee-350482d8d71d" (UID: "473362dd-40b3-4044-afee-350482d8d71d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.118305 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.118332 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/473362dd-40b3-4044-afee-350482d8d71d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.118343 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c04f822-c3ee-4f22-b9a7-1ad34dca0220-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.258513 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66f69cc4bc-nvshc"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.258566 4787 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-api-f4987686d-bgsdk"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.258580 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5b5b7b88f4-48559"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.258592 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c8d9fd94c-f2vvf"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.337806 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8d9fd94c-f2vvf" event={"ID":"a10056f0-3bd6-4c6b-891b-b671799f5d9d","Type":"ContainerStarted","Data":"02a6a5370f35bd568a8a555f4a60c8747c61d823c83378c3d13a0ac82808a49f"} Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.339446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" event={"ID":"c965959c-8adc-4c6e-a275-d05e8a3b7223","Type":"ContainerStarted","Data":"91cb0aed6b3ab87da9532462413bbc4407e73cf3d7e5950500b77afd2bba0b56"} Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.342232 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" event={"ID":"80787ea1-f265-44c5-a36b-e644b4472493","Type":"ContainerStarted","Data":"770c6c47449743acb8f3f4076b778314a44e2f68ce819e8a0cdef2a61b25c910"} Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.343905 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" event={"ID":"18009b64-0e4a-438d-9a5e-7619312865aa","Type":"ContainerStarted","Data":"26b99dd13fb552e884facb81083ff2266f2962a36b1f79e5ae8d6ade0af05be2"} Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.345576 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f4987686d-bgsdk" event={"ID":"d3ea81b9-beb0-484c-ba3c-b7b1364d179f","Type":"ContainerStarted","Data":"322b3d58502c1c65cb93e271d3439c218cd95da8256b15f5f98b75bce7522f55"} Feb 19 19:42:57 crc kubenswrapper[4787]: 
I0219 19:42:57.347660 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.347768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b644788db-6g25v" event={"ID":"10c7b3fe-a75d-45bd-ba12-9a801a77798e","Type":"ContainerStarted","Data":"3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85"} Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.347797 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b644788db-6g25v" event={"ID":"10c7b3fe-a75d-45bd-ba12-9a801a77798e","Type":"ContainerStarted","Data":"021abd964a69c2aa3eaf95f56a2bfab02e148821fe93ed4bde1ae8924f2e5556"} Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.348064 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66c8dc58c7-9t7wf" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.348285 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6k49" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.348996 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pcz54" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.383767 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" podStartSLOduration=2.727561014 podStartE2EDuration="15.383745294s" podCreationTimestamp="2026-02-19 19:42:42 +0000 UTC" firstStartedPulling="2026-02-19 19:42:43.403598829 +0000 UTC m=+1431.194264771" lastFinishedPulling="2026-02-19 19:42:56.059783109 +0000 UTC m=+1443.850449051" observedRunningTime="2026-02-19 19:42:57.373275046 +0000 UTC m=+1445.163940988" watchObservedRunningTime="2026-02-19 19:42:57.383745294 +0000 UTC m=+1445.174411236" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.410529 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b644788db-6g25v" podStartSLOduration=9.410505757 podStartE2EDuration="9.410505757s" podCreationTimestamp="2026-02-19 19:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:57.397791475 +0000 UTC m=+1445.188457437" watchObservedRunningTime="2026-02-19 19:42:57.410505757 +0000 UTC m=+1445.201171699" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.458063 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pcz54"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.492194 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pcz54"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.503238 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6k49"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.517684 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6k49"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 
19:42:57.525731 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.540667 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.551489 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552131 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-central-agent" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552149 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-central-agent" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552167 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerName="init" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552172 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerName="init" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552189 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerName="dnsmasq-dns" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552195 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerName="dnsmasq-dns" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552207 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="extract-utilities" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552213 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="extract-utilities" Feb 19 19:42:57 crc 
kubenswrapper[4787]: E0219 19:42:57.552226 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="proxy-httpd" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552232 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="proxy-httpd" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552243 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6533ed93-928f-47ce-8af0-419e5eb7c51c" containerName="heat-cfnapi" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552248 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6533ed93-928f-47ce-8af0-419e5eb7c51c" containerName="heat-cfnapi" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552259 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-notification-agent" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552265 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-notification-agent" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552272 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" containerName="heat-api" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552277 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" containerName="heat-api" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552296 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552303 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" Feb 19 19:42:57 crc 
kubenswrapper[4787]: E0219 19:42:57.552314 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="extract-content" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552320 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="extract-content" Feb 19 19:42:57 crc kubenswrapper[4787]: E0219 19:42:57.552329 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="sg-core" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552336 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="sg-core" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552524 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" containerName="heat-api" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552535 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6533ed93-928f-47ce-8af0-419e5eb7c51c" containerName="heat-cfnapi" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552543 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="sg-core" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552555 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="proxy-httpd" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552567 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" containerName="dnsmasq-dns" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552576 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-notification-agent" Feb 19 19:42:57 crc 
kubenswrapper[4787]: I0219 19:42:57.552590 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="473362dd-40b3-4044-afee-350482d8d71d" containerName="ceilometer-central-agent" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.552623 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" containerName="registry-server" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.555004 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.558750 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.558944 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.563543 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-66c8dc58c7-9t7wf"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.582591 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-66c8dc58c7-9t7wf"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.599000 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637187 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-config-data\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637242 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5n7\" (UniqueName: 
\"kubernetes.io/projected/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-kube-api-access-rf5n7\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637270 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637289 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-run-httpd\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637303 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-log-httpd\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637347 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-scripts\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.637431 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740019 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-config-data\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740076 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf5n7\" (UniqueName: \"kubernetes.io/projected/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-kube-api-access-rf5n7\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740107 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740124 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-run-httpd\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740136 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-log-httpd\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740183 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-scripts\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740269 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.740907 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-run-httpd\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.741443 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-log-httpd\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.746835 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.747221 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 
19:42:57.749472 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-scripts\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.750452 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-config-data\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.758993 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf5n7\" (UniqueName: \"kubernetes.io/projected/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-kube-api-access-rf5n7\") pod \"ceilometer-0\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") " pod="openstack/ceilometer-0" Feb 19 19:42:57 crc kubenswrapper[4787]: I0219 19:42:57.881274 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.360341 4787 generic.go:334] "Generic (PLEG): container finished" podID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerID="4ffb03083e2655a66451c80367b3d8eae1b4da92035916803fae0d87d558a760" exitCode=1 Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.360440 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f4987686d-bgsdk" event={"ID":"d3ea81b9-beb0-484c-ba3c-b7b1364d179f","Type":"ContainerDied","Data":"4ffb03083e2655a66451c80367b3d8eae1b4da92035916803fae0d87d558a760"} Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.361227 4787 scope.go:117] "RemoveContainer" containerID="4ffb03083e2655a66451c80367b3d8eae1b4da92035916803fae0d87d558a760" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.363234 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" event={"ID":"c965959c-8adc-4c6e-a275-d05e8a3b7223","Type":"ContainerStarted","Data":"b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9"} Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.363384 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.365192 4787 generic.go:334] "Generic (PLEG): container finished" podID="80787ea1-f265-44c5-a36b-e644b4472493" containerID="329d30838f99db3e031b0205532a5b52fc7c43dcc776ed6a3fdc2ef10456b60b" exitCode=1 Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.365272 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" event={"ID":"80787ea1-f265-44c5-a36b-e644b4472493","Type":"ContainerDied","Data":"329d30838f99db3e031b0205532a5b52fc7c43dcc776ed6a3fdc2ef10456b60b"} Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.365579 4787 scope.go:117] "RemoveContainer" 
containerID="329d30838f99db3e031b0205532a5b52fc7c43dcc776ed6a3fdc2ef10456b60b" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.367310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8d9fd94c-f2vvf" event={"ID":"a10056f0-3bd6-4c6b-891b-b671799f5d9d","Type":"ContainerStarted","Data":"03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2"} Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.367665 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.431329 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c8d9fd94c-f2vvf" podStartSLOduration=8.43130868 podStartE2EDuration="8.43130868s" podCreationTimestamp="2026-02-19 19:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:58.4183429 +0000 UTC m=+1446.209008842" watchObservedRunningTime="2026-02-19 19:42:58.43130868 +0000 UTC m=+1446.221974622" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.468963 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" podStartSLOduration=8.468945393 podStartE2EDuration="8.468945393s" podCreationTimestamp="2026-02-19 19:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:42:58.453006728 +0000 UTC m=+1446.243672670" watchObservedRunningTime="2026-02-19 19:42:58.468945393 +0000 UTC m=+1446.259611325" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.486629 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.667921 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.668146 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-log" containerID="cri-o://4b46856917ee73ddd229286822ffc328d13b1c26602c1a4f7d7d7151a00ea032" gracePeriod=30 Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.668270 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-httpd" containerID="cri-o://552a698ec62fc6a022b1253b3ca460481ea8903094fbbc878ec9dcdbaecd8be2" gracePeriod=30 Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.747953 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.747999 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.769092 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.769131 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.905921 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c04f822-c3ee-4f22-b9a7-1ad34dca0220" path="/var/lib/kubelet/pods/3c04f822-c3ee-4f22-b9a7-1ad34dca0220/volumes" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.906803 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="473362dd-40b3-4044-afee-350482d8d71d" path="/var/lib/kubelet/pods/473362dd-40b3-4044-afee-350482d8d71d/volumes" Feb 19 
19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.907752 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a43e2d-0ef2-4f15-b070-a99c13b82909" path="/var/lib/kubelet/pods/b1a43e2d-0ef2-4f15-b070-a99c13b82909/volumes" Feb 19 19:42:58 crc kubenswrapper[4787]: I0219 19:42:58.908741 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b341cddb-4e14-4928-af2b-18b902d1999c" path="/var/lib/kubelet/pods/b341cddb-4e14-4928-af2b-18b902d1999c/volumes" Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.379639 4787 generic.go:334] "Generic (PLEG): container finished" podID="2440e80f-a370-4420-866d-7059aa04e7b3" containerID="4b46856917ee73ddd229286822ffc328d13b1c26602c1a4f7d7d7151a00ea032" exitCode=143 Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.379711 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2440e80f-a370-4420-866d-7059aa04e7b3","Type":"ContainerDied","Data":"4b46856917ee73ddd229286822ffc328d13b1c26602c1a4f7d7d7151a00ea032"} Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.381664 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerStarted","Data":"4f9a9f512d47197b2f008f641747df5601b92869f42a496c5cf4234a31a12425"} Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.383894 4787 generic.go:334] "Generic (PLEG): container finished" podID="80787ea1-f265-44c5-a36b-e644b4472493" containerID="bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c" exitCode=1 Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.383932 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" event={"ID":"80787ea1-f265-44c5-a36b-e644b4472493","Type":"ContainerDied","Data":"bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c"} Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.383979 4787 
scope.go:117] "RemoveContainer" containerID="329d30838f99db3e031b0205532a5b52fc7c43dcc776ed6a3fdc2ef10456b60b" Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.384795 4787 scope.go:117] "RemoveContainer" containerID="bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c" Feb 19 19:42:59 crc kubenswrapper[4787]: E0219 19:42:59.385192 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66f69cc4bc-nvshc_openstack(80787ea1-f265-44c5-a36b-e644b4472493)\"" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" podUID="80787ea1-f265-44c5-a36b-e644b4472493" Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.385737 4787 generic.go:334] "Generic (PLEG): container finished" podID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerID="89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3" exitCode=1 Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.385998 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f4987686d-bgsdk" event={"ID":"d3ea81b9-beb0-484c-ba3c-b7b1364d179f","Type":"ContainerDied","Data":"89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3"} Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.386431 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.386561 4787 scope.go:117] "RemoveContainer" containerID="89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3" Feb 19 19:42:59 crc kubenswrapper[4787]: E0219 19:42:59.386807 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f4987686d-bgsdk_openstack(d3ea81b9-beb0-484c-ba3c-b7b1364d179f)\"" pod="openstack/heat-api-f4987686d-bgsdk" 
podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" Feb 19 19:42:59 crc kubenswrapper[4787]: I0219 19:42:59.499969 4787 scope.go:117] "RemoveContainer" containerID="4ffb03083e2655a66451c80367b3d8eae1b4da92035916803fae0d87d558a760" Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.118227 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.118725 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-log" containerID="cri-o://1805ac619773501a73de490092b5050dc1a1af5fc129fdb4ecc00e28689a3f0f" gracePeriod=30 Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.118852 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-httpd" containerID="cri-o://0276fba5db63f2a08f74ce737f460ef19cde912a4ab2d72f6d83e2d3fe1937f7" gracePeriod=30 Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.407695 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerStarted","Data":"213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89"} Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.408122 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerStarted","Data":"86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375"} Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.416275 4787 scope.go:117] "RemoveContainer" containerID="bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c" Feb 19 19:43:00 crc kubenswrapper[4787]: E0219 19:43:00.418658 4787 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66f69cc4bc-nvshc_openstack(80787ea1-f265-44c5-a36b-e644b4472493)\"" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" podUID="80787ea1-f265-44c5-a36b-e644b4472493" Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.424745 4787 scope.go:117] "RemoveContainer" containerID="89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3" Feb 19 19:43:00 crc kubenswrapper[4787]: E0219 19:43:00.425688 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f4987686d-bgsdk_openstack(d3ea81b9-beb0-484c-ba3c-b7b1364d179f)\"" pod="openstack/heat-api-f4987686d-bgsdk" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.440042 4787 generic.go:334] "Generic (PLEG): container finished" podID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerID="1805ac619773501a73de490092b5050dc1a1af5fc129fdb4ecc00e28689a3f0f" exitCode=143 Feb 19 19:43:00 crc kubenswrapper[4787]: I0219 19:43:00.440650 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2039e8f8-5048-488a-bc3a-82a3d4389943","Type":"ContainerDied","Data":"1805ac619773501a73de490092b5050dc1a1af5fc129fdb4ecc00e28689a3f0f"} Feb 19 19:43:01 crc kubenswrapper[4787]: I0219 19:43:01.285716 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:43:01 crc kubenswrapper[4787]: I0219 19:43:01.848224 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:43:02 crc kubenswrapper[4787]: I0219 19:43:02.597128 4787 generic.go:334] "Generic (PLEG): container finished" podID="2440e80f-a370-4420-866d-7059aa04e7b3" 
containerID="552a698ec62fc6a022b1253b3ca460481ea8903094fbbc878ec9dcdbaecd8be2" exitCode=0 Feb 19 19:43:02 crc kubenswrapper[4787]: I0219 19:43:02.597713 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2440e80f-a370-4420-866d-7059aa04e7b3","Type":"ContainerDied","Data":"552a698ec62fc6a022b1253b3ca460481ea8903094fbbc878ec9dcdbaecd8be2"} Feb 19 19:43:02 crc kubenswrapper[4787]: I0219 19:43:02.649842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerStarted","Data":"addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c"} Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.018254 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.128944 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-httpd-run\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.129755 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.129795 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-combined-ca-bundle\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: 
I0219 19:43:03.129972 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-public-tls-certs\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.130030 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-config-data\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.130059 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4q7\" (UniqueName: \"kubernetes.io/projected/2440e80f-a370-4420-866d-7059aa04e7b3-kube-api-access-nb4q7\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.132880 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.174774 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-scripts\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.174921 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-logs\") pod \"2440e80f-a370-4420-866d-7059aa04e7b3\" (UID: \"2440e80f-a370-4420-866d-7059aa04e7b3\") " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.176332 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.177027 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-logs" (OuterVolumeSpecName: "logs") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.192056 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2440e80f-a370-4420-866d-7059aa04e7b3-kube-api-access-nb4q7" (OuterVolumeSpecName: "kube-api-access-nb4q7") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "kube-api-access-nb4q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.201109 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-scripts" (OuterVolumeSpecName: "scripts") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.205531 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b" (OuterVolumeSpecName: "glance") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.217651 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.243968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: "2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.278554 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") on node \"crc\" " Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.278600 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.278631 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.278644 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4q7\" (UniqueName: \"kubernetes.io/projected/2440e80f-a370-4420-866d-7059aa04e7b3-kube-api-access-nb4q7\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.278657 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.278669 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2440e80f-a370-4420-866d-7059aa04e7b3-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.316933 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-config-data" (OuterVolumeSpecName: "config-data") pod "2440e80f-a370-4420-866d-7059aa04e7b3" (UID: 
"2440e80f-a370-4420-866d-7059aa04e7b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.345458 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.345670 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b") on node "crc" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.382855 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.382897 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2440e80f-a370-4420-866d-7059aa04e7b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.678015 4787 generic.go:334] "Generic (PLEG): container finished" podID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerID="0276fba5db63f2a08f74ce737f460ef19cde912a4ab2d72f6d83e2d3fe1937f7" exitCode=0 Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.678081 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2039e8f8-5048-488a-bc3a-82a3d4389943","Type":"ContainerDied","Data":"0276fba5db63f2a08f74ce737f460ef19cde912a4ab2d72f6d83e2d3fe1937f7"} Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.683178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2440e80f-a370-4420-866d-7059aa04e7b3","Type":"ContainerDied","Data":"9b7fd0c7f26afb9a2db21f831fe69c7d6246fce4b9de03987ec43383360b1269"} Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.683225 4787 scope.go:117] "RemoveContainer" containerID="552a698ec62fc6a022b1253b3ca460481ea8903094fbbc878ec9dcdbaecd8be2" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.683393 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.748005 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.749361 4787 scope.go:117] "RemoveContainer" containerID="bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c" Feb 19 19:43:03 crc kubenswrapper[4787]: E0219 19:43:03.749815 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66f69cc4bc-nvshc_openstack(80787ea1-f265-44c5-a36b-e644b4472493)\"" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" podUID="80787ea1-f265-44c5-a36b-e644b4472493" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.750180 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.771075 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.772072 4787 scope.go:117] "RemoveContainer" containerID="89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3" Feb 19 19:43:03 crc kubenswrapper[4787]: E0219 19:43:03.772384 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f4987686d-bgsdk_openstack(d3ea81b9-beb0-484c-ba3c-b7b1364d179f)\"" pod="openstack/heat-api-f4987686d-bgsdk" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.772686 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.799589 4787 scope.go:117] "RemoveContainer" containerID="4b46856917ee73ddd229286822ffc328d13b1c26602c1a4f7d7d7151a00ea032" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.834713 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.852289 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.891526 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:43:03 crc kubenswrapper[4787]: E0219 19:43:03.892126 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-log" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.892151 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-log" Feb 19 19:43:03 crc kubenswrapper[4787]: E0219 19:43:03.892162 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-httpd" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.892169 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-httpd" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.892961 4787 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-log" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.892978 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" containerName="glance-httpd" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.900159 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.906016 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.906233 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 19:43:03 crc kubenswrapper[4787]: I0219 19:43:03.987938 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.091209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.093435 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.093797 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.093925 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxkpb\" (UniqueName: \"kubernetes.io/projected/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-kube-api-access-qxkpb\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.094187 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.094428 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.094656 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.094916 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-logs\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197081 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-logs\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197418 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197520 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197584 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197618 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qxkpb\" (UniqueName: \"kubernetes.io/projected/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-kube-api-access-qxkpb\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197634 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197652 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.197696 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.198039 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-logs\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.198451 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.213726 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.214430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-config-data\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.222505 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.251195 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-scripts\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.258401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxkpb\" (UniqueName: \"kubernetes.io/projected/e1a3964b-540d-4b05-a2ad-b39a87d44a3a-kube-api-access-qxkpb\") pod 
\"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.306618 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.307068 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ed3d7c97b215637ba9b7943cd9135b129ba954894c5daafc1dc18f3b0039582/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.310039 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.402999 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-httpd-run\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.403186 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-scripts\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.403226 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-internal-tls-certs\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.403994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.404036 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zgfb\" (UniqueName: \"kubernetes.io/projected/2039e8f8-5048-488a-bc3a-82a3d4389943-kube-api-access-9zgfb\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.404107 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-logs\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.404199 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-config-data\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.404249 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-combined-ca-bundle\") pod \"2039e8f8-5048-488a-bc3a-82a3d4389943\" (UID: \"2039e8f8-5048-488a-bc3a-82a3d4389943\") " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.406501 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.410748 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-scripts" (OuterVolumeSpecName: "scripts") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.414251 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-logs" (OuterVolumeSpecName: "logs") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.429589 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2039e8f8-5048-488a-bc3a-82a3d4389943-kube-api-access-9zgfb" (OuterVolumeSpecName: "kube-api-access-9zgfb") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "kube-api-access-9zgfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.515922 4787 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.515951 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.515961 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zgfb\" (UniqueName: \"kubernetes.io/projected/2039e8f8-5048-488a-bc3a-82a3d4389943-kube-api-access-9zgfb\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.515972 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2039e8f8-5048-488a-bc3a-82a3d4389943-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc 
kubenswrapper[4787]: I0219 19:43:04.597960 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2" (OuterVolumeSpecName: "glance") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.611839 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-05b8ba3e-aba8-4473-9500-90f75ce5f38b\") pod \"glance-default-external-api-0\" (UID: \"e1a3964b-540d-4b05-a2ad-b39a87d44a3a\") " pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.618345 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") on node \"crc\" " Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.652822 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.682690 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.683099 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2") on node "crc" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.683489 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.710400 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-config-data" (OuterVolumeSpecName: "config-data") pod "2039e8f8-5048-488a-bc3a-82a3d4389943" (UID: "2039e8f8-5048-488a-bc3a-82a3d4389943"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.720280 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.720325 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.720343 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.720358 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2039e8f8-5048-488a-bc3a-82a3d4389943-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.721909 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2039e8f8-5048-488a-bc3a-82a3d4389943","Type":"ContainerDied","Data":"2b742fc62001b18d99622ecee22a331a553c71be6428700df09fc3355825a4df"} Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.721968 4787 scope.go:117] "RemoveContainer" containerID="0276fba5db63f2a08f74ce737f460ef19cde912a4ab2d72f6d83e2d3fe1937f7" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.721968 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.731950 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerStarted","Data":"8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1"} Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.732293 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.732569 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-central-agent" containerID="cri-o://86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375" gracePeriod=30 Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.733801 4787 scope.go:117] "RemoveContainer" containerID="89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.734208 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="proxy-httpd" containerID="cri-o://8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1" gracePeriod=30 Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.734313 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-notification-agent" containerID="cri-o://213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89" gracePeriod=30 Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.734402 4787 scope.go:117] "RemoveContainer" containerID="bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 
19:43:04.734422 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="sg-core" containerID="cri-o://addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c" gracePeriod=30 Feb 19 19:43:04 crc kubenswrapper[4787]: E0219 19:43:04.734492 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-f4987686d-bgsdk_openstack(d3ea81b9-beb0-484c-ba3c-b7b1364d179f)\"" pod="openstack/heat-api-f4987686d-bgsdk" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" Feb 19 19:43:04 crc kubenswrapper[4787]: E0219 19:43:04.739378 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-66f69cc4bc-nvshc_openstack(80787ea1-f265-44c5-a36b-e644b4472493)\"" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" podUID="80787ea1-f265-44c5-a36b-e644b4472493" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.767568 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.339593073 podStartE2EDuration="7.767547702s" podCreationTimestamp="2026-02-19 19:42:57 +0000 UTC" firstStartedPulling="2026-02-19 19:42:58.500178443 +0000 UTC m=+1446.290844385" lastFinishedPulling="2026-02-19 19:43:03.928133072 +0000 UTC m=+1451.718799014" observedRunningTime="2026-02-19 19:43:04.760976675 +0000 UTC m=+1452.551642627" watchObservedRunningTime="2026-02-19 19:43:04.767547702 +0000 UTC m=+1452.558213644" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.795846 4787 scope.go:117] "RemoveContainer" containerID="1805ac619773501a73de490092b5050dc1a1af5fc129fdb4ecc00e28689a3f0f" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.826175 4787 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.859526 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.888160 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:43:04 crc kubenswrapper[4787]: E0219 19:43:04.888596 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-log" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.888626 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-log" Feb 19 19:43:04 crc kubenswrapper[4787]: E0219 19:43:04.888667 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-httpd" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.888675 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-httpd" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.888893 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-httpd" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.888924 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" containerName="glance-log" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.890224 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.892958 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.894157 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.895509 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.935883 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2039e8f8-5048-488a-bc3a-82a3d4389943" path="/var/lib/kubelet/pods/2039e8f8-5048-488a-bc3a-82a3d4389943/volumes" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.936900 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2440e80f-a370-4420-866d-7059aa04e7b3" path="/var/lib/kubelet/pods/2440e80f-a370-4420-866d-7059aa04e7b3/volumes" Feb 19 19:43:04 crc kubenswrapper[4787]: I0219 19:43:04.986774 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.029758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a841286-0da4-4bd8-96d3-d7cb751bbafb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.029840 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod 
\"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.029885 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.029919 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8px\" (UniqueName: \"kubernetes.io/projected/6a841286-0da4-4bd8-96d3-d7cb751bbafb-kube-api-access-gf8px\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.029977 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a841286-0da4-4bd8-96d3-d7cb751bbafb-logs\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.030040 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.030106 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.030170 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.136014 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.136054 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/039867c40e5afa02457a0a3be85bbd756f1b9686f252b64fbed40b83ae37d83d/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.139546 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.139736 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.139786 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8px\" (UniqueName: \"kubernetes.io/projected/6a841286-0da4-4bd8-96d3-d7cb751bbafb-kube-api-access-gf8px\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.142468 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a841286-0da4-4bd8-96d3-d7cb751bbafb-logs\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.142601 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.142727 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.142833 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.142937 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a841286-0da4-4bd8-96d3-d7cb751bbafb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.143375 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a841286-0da4-4bd8-96d3-d7cb751bbafb-logs\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.179530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a841286-0da4-4bd8-96d3-d7cb751bbafb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.180468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.180539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.181806 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.183759 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a841286-0da4-4bd8-96d3-d7cb751bbafb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.183981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8px\" (UniqueName: \"kubernetes.io/projected/6a841286-0da4-4bd8-96d3-d7cb751bbafb-kube-api-access-gf8px\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.226402 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b8330a44-b8d7-4368-be14-79ea23bc03a2\") pod \"glance-default-internal-api-0\" (UID: \"6a841286-0da4-4bd8-96d3-d7cb751bbafb\") " pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.415838 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:05 crc kubenswrapper[4787]: W0219 19:43:05.613903 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a3964b_540d_4b05_a2ad_b39a87d44a3a.slice/crio-faab9655c8886b8e593dcca7582bc31273e035ebdb7f4971ea4a7ae2afc8bf56 WatchSource:0}: Error finding container faab9655c8886b8e593dcca7582bc31273e035ebdb7f4971ea4a7ae2afc8bf56: Status 404 returned error can't find the container with id faab9655c8886b8e593dcca7582bc31273e035ebdb7f4971ea4a7ae2afc8bf56 Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.614740 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.758275 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1a3964b-540d-4b05-a2ad-b39a87d44a3a","Type":"ContainerStarted","Data":"faab9655c8886b8e593dcca7582bc31273e035ebdb7f4971ea4a7ae2afc8bf56"} Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.779745 4787 generic.go:334] "Generic (PLEG): container finished" podID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerID="addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c" exitCode=2 Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.779786 4787 generic.go:334] "Generic (PLEG): container finished" podID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerID="213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89" exitCode=0 Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.779810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerDied","Data":"addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c"} Feb 19 19:43:05 crc kubenswrapper[4787]: I0219 19:43:05.779843 4787 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerDied","Data":"213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89"} Feb 19 19:43:06 crc kubenswrapper[4787]: I0219 19:43:06.043434 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 19:43:06 crc kubenswrapper[4787]: W0219 19:43:06.056663 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a841286_0da4_4bd8_96d3_d7cb751bbafb.slice/crio-166fe6a8bde5f0b961ef1c4f7b89201240f8b0fb87559f9d896f8c6703de2b08 WatchSource:0}: Error finding container 166fe6a8bde5f0b961ef1c4f7b89201240f8b0fb87559f9d896f8c6703de2b08: Status 404 returned error can't find the container with id 166fe6a8bde5f0b961ef1c4f7b89201240f8b0fb87559f9d896f8c6703de2b08 Feb 19 19:43:06 crc kubenswrapper[4787]: I0219 19:43:06.815813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1a3964b-540d-4b05-a2ad-b39a87d44a3a","Type":"ContainerStarted","Data":"924feee4bd2ccdad3cac194bd935484efad6834651a5ef4fcf77cf0300fafdac"} Feb 19 19:43:06 crc kubenswrapper[4787]: I0219 19:43:06.842805 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6a841286-0da4-4bd8-96d3-d7cb751bbafb","Type":"ContainerStarted","Data":"54a2e0cc632ebf06df0eff32c9d5bbd691088c56c29b32c5c6fc216d76831580"} Feb 19 19:43:06 crc kubenswrapper[4787]: I0219 19:43:06.842853 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6a841286-0da4-4bd8-96d3-d7cb751bbafb","Type":"ContainerStarted","Data":"166fe6a8bde5f0b961ef1c4f7b89201240f8b0fb87559f9d896f8c6703de2b08"} Feb 19 19:43:07 crc kubenswrapper[4787]: I0219 19:43:07.856128 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"6a841286-0da4-4bd8-96d3-d7cb751bbafb","Type":"ContainerStarted","Data":"1cf561cc5592735034853fba12bc40ceefa01e5e8225a5470069798bd5d33b8c"} Feb 19 19:43:07 crc kubenswrapper[4787]: I0219 19:43:07.857718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e1a3964b-540d-4b05-a2ad-b39a87d44a3a","Type":"ContainerStarted","Data":"4f7feb87a6ca900ebc1d1672435603cc136a486725d3693eb19706c49410bb63"} Feb 19 19:43:07 crc kubenswrapper[4787]: I0219 19:43:07.884807 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.884786753 podStartE2EDuration="3.884786753s" podCreationTimestamp="2026-02-19 19:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:07.875016035 +0000 UTC m=+1455.665681987" watchObservedRunningTime="2026-02-19 19:43:07.884786753 +0000 UTC m=+1455.675452705" Feb 19 19:43:07 crc kubenswrapper[4787]: I0219 19:43:07.928238 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.928194581 podStartE2EDuration="4.928194581s" podCreationTimestamp="2026-02-19 19:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:07.898961928 +0000 UTC m=+1455.689627870" watchObservedRunningTime="2026-02-19 19:43:07.928194581 +0000 UTC m=+1455.718860523" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.285137 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.353178 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-66f69cc4bc-nvshc"] Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.395562 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.494749 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f4987686d-bgsdk"] Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.883305 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.883965 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.884514 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" event={"ID":"80787ea1-f265-44c5-a36b-e644b4472493","Type":"ContainerDied","Data":"770c6c47449743acb8f3f4076b778314a44e2f68ce819e8a0cdef2a61b25c910"} Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.884580 4787 scope.go:117] "RemoveContainer" containerID="bd8f9ffd87300ec69adf3328c70b4bd3564bc7bdd9e7cc349dedd44c2bbc860c" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.956291 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-combined-ca-bundle\") pod \"80787ea1-f265-44c5-a36b-e644b4472493\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.956352 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data-custom\") pod \"80787ea1-f265-44c5-a36b-e644b4472493\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " Feb 19 19:43:08 crc 
kubenswrapper[4787]: I0219 19:43:08.956401 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8bcl\" (UniqueName: \"kubernetes.io/projected/80787ea1-f265-44c5-a36b-e644b4472493-kube-api-access-l8bcl\") pod \"80787ea1-f265-44c5-a36b-e644b4472493\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.956523 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data\") pod \"80787ea1-f265-44c5-a36b-e644b4472493\" (UID: \"80787ea1-f265-44c5-a36b-e644b4472493\") " Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.968292 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9749b6cff-95bk5"] Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.968490 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-9749b6cff-95bk5" podUID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" containerName="heat-engine" containerID="cri-o://9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" gracePeriod=60 Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.994414 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80787ea1-f265-44c5-a36b-e644b4472493-kube-api-access-l8bcl" (OuterVolumeSpecName: "kube-api-access-l8bcl") pod "80787ea1-f265-44c5-a36b-e644b4472493" (UID: "80787ea1-f265-44c5-a36b-e644b4472493"). InnerVolumeSpecName "kube-api-access-l8bcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:08 crc kubenswrapper[4787]: I0219 19:43:08.994749 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "80787ea1-f265-44c5-a36b-e644b4472493" (UID: "80787ea1-f265-44c5-a36b-e644b4472493"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.006045 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80787ea1-f265-44c5-a36b-e644b4472493" (UID: "80787ea1-f265-44c5-a36b-e644b4472493"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.038300 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data" (OuterVolumeSpecName: "config-data") pod "80787ea1-f265-44c5-a36b-e644b4472493" (UID: "80787ea1-f265-44c5-a36b-e644b4472493"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.061261 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.061302 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8bcl\" (UniqueName: \"kubernetes.io/projected/80787ea1-f265-44c5-a36b-e644b4472493-kube-api-access-l8bcl\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.061317 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.061336 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80787ea1-f265-44c5-a36b-e644b4472493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.118019 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.162268 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data\") pod \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.162534 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-combined-ca-bundle\") pod \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.162601 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data-custom\") pod \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.162654 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pg4k\" (UniqueName: \"kubernetes.io/projected/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-kube-api-access-7pg4k\") pod \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\" (UID: \"d3ea81b9-beb0-484c-ba3c-b7b1364d179f\") " Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.166435 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d3ea81b9-beb0-484c-ba3c-b7b1364d179f" (UID: "d3ea81b9-beb0-484c-ba3c-b7b1364d179f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.166813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-kube-api-access-7pg4k" (OuterVolumeSpecName: "kube-api-access-7pg4k") pod "d3ea81b9-beb0-484c-ba3c-b7b1364d179f" (UID: "d3ea81b9-beb0-484c-ba3c-b7b1364d179f"). InnerVolumeSpecName "kube-api-access-7pg4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.203594 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ea81b9-beb0-484c-ba3c-b7b1364d179f" (UID: "d3ea81b9-beb0-484c-ba3c-b7b1364d179f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.233092 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data" (OuterVolumeSpecName: "config-data") pod "d3ea81b9-beb0-484c-ba3c-b7b1364d179f" (UID: "d3ea81b9-beb0-484c-ba3c-b7b1364d179f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.263660 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.263737 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.265087 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.265166 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.265180 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pg4k\" (UniqueName: \"kubernetes.io/projected/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-kube-api-access-7pg4k\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.265196 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ea81b9-beb0-484c-ba3c-b7b1364d179f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.897377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-api-f4987686d-bgsdk" event={"ID":"d3ea81b9-beb0-484c-ba3c-b7b1364d179f","Type":"ContainerDied","Data":"322b3d58502c1c65cb93e271d3439c218cd95da8256b15f5f98b75bce7522f55"} Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.897414 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f4987686d-bgsdk" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.897435 4787 scope.go:117] "RemoveContainer" containerID="89b05fcf9a79aa317af3c0ef1a789a4be84543ab909f43e63a939460f664e5f3" Feb 19 19:43:09 crc kubenswrapper[4787]: I0219 19:43:09.900037 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66f69cc4bc-nvshc" Feb 19 19:43:10 crc kubenswrapper[4787]: I0219 19:43:10.001992 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66f69cc4bc-nvshc"] Feb 19 19:43:10 crc kubenswrapper[4787]: I0219 19:43:10.012697 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-66f69cc4bc-nvshc"] Feb 19 19:43:10 crc kubenswrapper[4787]: I0219 19:43:10.056276 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-f4987686d-bgsdk"] Feb 19 19:43:10 crc kubenswrapper[4787]: I0219 19:43:10.072642 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-f4987686d-bgsdk"] Feb 19 19:43:10 crc kubenswrapper[4787]: I0219 19:43:10.912110 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80787ea1-f265-44c5-a36b-e644b4472493" path="/var/lib/kubelet/pods/80787ea1-f265-44c5-a36b-e644b4472493/volumes" Feb 19 19:43:10 crc kubenswrapper[4787]: I0219 19:43:10.913129 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" path="/var/lib/kubelet/pods/d3ea81b9-beb0-484c-ba3c-b7b1364d179f/volumes" Feb 19 19:43:11 crc kubenswrapper[4787]: E0219 19:43:11.808400 4787 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 19:43:11 crc kubenswrapper[4787]: E0219 19:43:11.813086 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 19:43:11 crc kubenswrapper[4787]: E0219 19:43:11.814540 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 19:43:11 crc kubenswrapper[4787]: E0219 19:43:11.814575 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-9749b6cff-95bk5" podUID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" containerName="heat-engine" Feb 19 19:43:13 crc kubenswrapper[4787]: I0219 19:43:13.970524 4787 generic.go:334] "Generic (PLEG): container finished" podID="18009b64-0e4a-438d-9a5e-7619312865aa" containerID="26b99dd13fb552e884facb81083ff2266f2962a36b1f79e5ae8d6ade0af05be2" exitCode=0 Feb 19 19:43:13 crc kubenswrapper[4787]: I0219 19:43:13.970694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" 
event={"ID":"18009b64-0e4a-438d-9a5e-7619312865aa","Type":"ContainerDied","Data":"26b99dd13fb552e884facb81083ff2266f2962a36b1f79e5ae8d6ade0af05be2"} Feb 19 19:43:14 crc kubenswrapper[4787]: I0219 19:43:14.905940 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 19:43:14 crc kubenswrapper[4787]: I0219 19:43:14.906253 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 19:43:14 crc kubenswrapper[4787]: I0219 19:43:14.935249 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 19:43:14 crc kubenswrapper[4787]: I0219 19:43:14.944294 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 19:43:14 crc kubenswrapper[4787]: I0219 19:43:14.981649 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 19:43:14 crc kubenswrapper[4787]: I0219 19:43:14.981692 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.417365 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.417706 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.457160 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.471838 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:15 crc 
kubenswrapper[4787]: I0219 19:43:15.493973 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.638308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-config-data\") pod \"18009b64-0e4a-438d-9a5e-7619312865aa\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.638512 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-combined-ca-bundle\") pod \"18009b64-0e4a-438d-9a5e-7619312865aa\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.638740 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c5tk\" (UniqueName: \"kubernetes.io/projected/18009b64-0e4a-438d-9a5e-7619312865aa-kube-api-access-2c5tk\") pod \"18009b64-0e4a-438d-9a5e-7619312865aa\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.638819 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-scripts\") pod \"18009b64-0e4a-438d-9a5e-7619312865aa\" (UID: \"18009b64-0e4a-438d-9a5e-7619312865aa\") " Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.663905 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18009b64-0e4a-438d-9a5e-7619312865aa-kube-api-access-2c5tk" (OuterVolumeSpecName: "kube-api-access-2c5tk") pod "18009b64-0e4a-438d-9a5e-7619312865aa" (UID: "18009b64-0e4a-438d-9a5e-7619312865aa"). InnerVolumeSpecName "kube-api-access-2c5tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.667067 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-scripts" (OuterVolumeSpecName: "scripts") pod "18009b64-0e4a-438d-9a5e-7619312865aa" (UID: "18009b64-0e4a-438d-9a5e-7619312865aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.692214 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-config-data" (OuterVolumeSpecName: "config-data") pod "18009b64-0e4a-438d-9a5e-7619312865aa" (UID: "18009b64-0e4a-438d-9a5e-7619312865aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.696863 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18009b64-0e4a-438d-9a5e-7619312865aa" (UID: "18009b64-0e4a-438d-9a5e-7619312865aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.749468 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.749512 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c5tk\" (UniqueName: \"kubernetes.io/projected/18009b64-0e4a-438d-9a5e-7619312865aa-kube-api-access-2c5tk\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.749528 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.749538 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18009b64-0e4a-438d-9a5e-7619312865aa-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.994631 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" event={"ID":"18009b64-0e4a-438d-9a5e-7619312865aa","Type":"ContainerDied","Data":"3bb951a0e8f4f5ba8429f32719f818eacb31a76bdc5b7ef07e51a14a0a67b2cb"} Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.994688 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb951a0e8f4f5ba8429f32719f818eacb31a76bdc5b7ef07e51a14a0a67b2cb" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.994718 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rz4fd" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.995282 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:15 crc kubenswrapper[4787]: I0219 19:43:15.995668 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.111318 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 19:43:16 crc kubenswrapper[4787]: E0219 19:43:16.111957 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80787ea1-f265-44c5-a36b-e644b4472493" containerName="heat-cfnapi" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.111984 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="80787ea1-f265-44c5-a36b-e644b4472493" containerName="heat-cfnapi" Feb 19 19:43:16 crc kubenswrapper[4787]: E0219 19:43:16.112002 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18009b64-0e4a-438d-9a5e-7619312865aa" containerName="nova-cell0-conductor-db-sync" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112011 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="18009b64-0e4a-438d-9a5e-7619312865aa" containerName="nova-cell0-conductor-db-sync" Feb 19 19:43:16 crc kubenswrapper[4787]: E0219 19:43:16.112043 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerName="heat-api" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112054 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerName="heat-api" Feb 19 19:43:16 crc kubenswrapper[4787]: E0219 19:43:16.112069 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerName="heat-api" Feb 19 19:43:16 crc 
kubenswrapper[4787]: I0219 19:43:16.112076 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerName="heat-api" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112359 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerName="heat-api" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112386 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ea81b9-beb0-484c-ba3c-b7b1364d179f" containerName="heat-api" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112421 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="18009b64-0e4a-438d-9a5e-7619312865aa" containerName="nova-cell0-conductor-db-sync" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112434 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="80787ea1-f265-44c5-a36b-e644b4472493" containerName="heat-cfnapi" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.112443 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="80787ea1-f265-44c5-a36b-e644b4472493" containerName="heat-cfnapi" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.113545 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.115888 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-gcvzq" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.117091 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.127466 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.260365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mqw\" (UniqueName: \"kubernetes.io/projected/64c53e31-72f6-4357-a3b0-dabedc9b834e-kube-api-access-b4mqw\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.260463 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c53e31-72f6-4357-a3b0-dabedc9b834e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.260621 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c53e31-72f6-4357-a3b0-dabedc9b834e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.363004 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mqw\" (UniqueName: 
\"kubernetes.io/projected/64c53e31-72f6-4357-a3b0-dabedc9b834e-kube-api-access-b4mqw\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.363159 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c53e31-72f6-4357-a3b0-dabedc9b834e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.363224 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c53e31-72f6-4357-a3b0-dabedc9b834e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.367805 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c53e31-72f6-4357-a3b0-dabedc9b834e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.367864 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c53e31-72f6-4357-a3b0-dabedc9b834e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.386921 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mqw\" (UniqueName: \"kubernetes.io/projected/64c53e31-72f6-4357-a3b0-dabedc9b834e-kube-api-access-b4mqw\") pod \"nova-cell0-conductor-0\" (UID: 
\"64c53e31-72f6-4357-a3b0-dabedc9b834e\") " pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:16 crc kubenswrapper[4787]: I0219 19:43:16.439782 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:17 crc kubenswrapper[4787]: I0219 19:43:17.027400 4787 generic.go:334] "Generic (PLEG): container finished" podID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerID="86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375" exitCode=0 Feb 19 19:43:17 crc kubenswrapper[4787]: I0219 19:43:17.027442 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerDied","Data":"86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375"} Feb 19 19:43:17 crc kubenswrapper[4787]: I0219 19:43:17.198475 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.039767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"64c53e31-72f6-4357-a3b0-dabedc9b834e","Type":"ContainerStarted","Data":"803913c36046e6d7d136fa9bb0b0502b36ff99631fc30c1c62f8c37d2d3d7fa0"} Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.040093 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"64c53e31-72f6-4357-a3b0-dabedc9b834e","Type":"ContainerStarted","Data":"219c21b2fe24e15de767c8345a2bd60de55e50f235f56132b4eb99b292819c31"} Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.041506 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.072379 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.072355335 
podStartE2EDuration="2.072355335s" podCreationTimestamp="2026-02-19 19:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:18.054711622 +0000 UTC m=+1465.845377564" watchObservedRunningTime="2026-02-19 19:43:18.072355335 +0000 UTC m=+1465.863021277" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.790058 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.793591 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.793758 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.853815 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkfl\" (UniqueName: \"kubernetes.io/projected/5c223cd7-3eba-4f11-8f82-5ef5a479daee-kube-api-access-ptkfl\") pod \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.853918 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data-custom\") pod \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.853986 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data\") pod \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 
19:43:18.854138 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-combined-ca-bundle\") pod \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\" (UID: \"5c223cd7-3eba-4f11-8f82-5ef5a479daee\") " Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.863704 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c223cd7-3eba-4f11-8f82-5ef5a479daee" (UID: "5c223cd7-3eba-4f11-8f82-5ef5a479daee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.868842 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c223cd7-3eba-4f11-8f82-5ef5a479daee-kube-api-access-ptkfl" (OuterVolumeSpecName: "kube-api-access-ptkfl") pod "5c223cd7-3eba-4f11-8f82-5ef5a479daee" (UID: "5c223cd7-3eba-4f11-8f82-5ef5a479daee"). InnerVolumeSpecName "kube-api-access-ptkfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.962738 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptkfl\" (UniqueName: \"kubernetes.io/projected/5c223cd7-3eba-4f11-8f82-5ef5a479daee-kube-api-access-ptkfl\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:18 crc kubenswrapper[4787]: I0219 19:43:18.962822 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:18.994986 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c223cd7-3eba-4f11-8f82-5ef5a479daee" (UID: "5c223cd7-3eba-4f11-8f82-5ef5a479daee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.021110 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data" (OuterVolumeSpecName: "config-data") pod "5c223cd7-3eba-4f11-8f82-5ef5a479daee" (UID: "5c223cd7-3eba-4f11-8f82-5ef5a479daee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.050400 4787 generic.go:334] "Generic (PLEG): container finished" podID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" exitCode=0 Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.050492 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9749b6cff-95bk5" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.050512 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9749b6cff-95bk5" event={"ID":"5c223cd7-3eba-4f11-8f82-5ef5a479daee","Type":"ContainerDied","Data":"9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752"} Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.052383 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9749b6cff-95bk5" event={"ID":"5c223cd7-3eba-4f11-8f82-5ef5a479daee","Type":"ContainerDied","Data":"bcaa2c67faaf51e8978af9ddff3f3634796f5e9c4e5bec7966a1ecd87fd2d215"} Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.052473 4787 scope.go:117] "RemoveContainer" containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.072601 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.072645 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c223cd7-3eba-4f11-8f82-5ef5a479daee-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.086974 4787 scope.go:117] "RemoveContainer" containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" Feb 19 19:43:19 crc kubenswrapper[4787]: E0219 19:43:19.087526 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752\": container with ID starting with 9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752 not found: ID does not exist" 
containerID="9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.087576 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752"} err="failed to get container status \"9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752\": rpc error: code = NotFound desc = could not find container \"9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752\": container with ID starting with 9468f0041f90c186004c03ec2944edb5d70df039b5511a100ff58758d42ce752 not found: ID does not exist" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.109989 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9749b6cff-95bk5"] Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.120056 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-9749b6cff-95bk5"] Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.298430 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.680114 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.680257 4787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 19:43:19 crc kubenswrapper[4787]: I0219 19:43:19.698453 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 19:43:20 crc kubenswrapper[4787]: I0219 19:43:20.903725 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" path="/var/lib/kubelet/pods/5c223cd7-3eba-4f11-8f82-5ef5a479daee/volumes" Feb 19 19:43:26 crc kubenswrapper[4787]: I0219 
19:43:26.518730 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.299317 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kn2fb"] Feb 19 19:43:27 crc kubenswrapper[4787]: E0219 19:43:27.300819 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80787ea1-f265-44c5-a36b-e644b4472493" containerName="heat-cfnapi" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.300840 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="80787ea1-f265-44c5-a36b-e644b4472493" containerName="heat-cfnapi" Feb 19 19:43:27 crc kubenswrapper[4787]: E0219 19:43:27.300854 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" containerName="heat-engine" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.300860 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" containerName="heat-engine" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.301092 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c223cd7-3eba-4f11-8f82-5ef5a479daee" containerName="heat-engine" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.301896 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.303780 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.307987 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.327349 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn2fb"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.413314 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hcqc\" (UniqueName: \"kubernetes.io/projected/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-kube-api-access-8hcqc\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.413388 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.413560 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-scripts\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.413595 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-config-data\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.476967 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.478799 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.498080 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.512769 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.515427 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-scripts\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.515483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-config-data\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.515648 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hcqc\" (UniqueName: \"kubernetes.io/projected/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-kube-api-access-8hcqc\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " 
pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.531882 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.548196 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.552694 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-scripts\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.557191 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.564203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-config-data\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.573262 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.573632 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.595741 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.599434 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hcqc\" (UniqueName: \"kubernetes.io/projected/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-kube-api-access-8hcqc\") pod \"nova-cell0-cell-mapping-kn2fb\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.635569 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstkq\" (UniqueName: \"kubernetes.io/projected/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-kube-api-access-nstkq\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.636007 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-logs\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.636158 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.665304 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-config-data\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.666424 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.689007 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.690492 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.696503 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.718302 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.767791 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nstkq\" (UniqueName: \"kubernetes.io/projected/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-kube-api-access-nstkq\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.767841 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165db7d1-424a-47a1-919b-7d5a30145b21-logs\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.767904 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-logs\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.767990 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-config-data\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.768046 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.768082 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck85s\" (UniqueName: \"kubernetes.io/projected/165db7d1-424a-47a1-919b-7d5a30145b21-kube-api-access-ck85s\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.768139 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-config-data\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.768195 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.769101 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-logs\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.794427 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-config-data\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc 
kubenswrapper[4787]: I0219 19:43:27.795196 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.798674 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7wmdr"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.810200 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.820250 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nstkq\" (UniqueName: \"kubernetes.io/projected/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-kube-api-access-nstkq\") pod \"nova-api-0\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " pod="openstack/nova-api-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.827406 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7wmdr"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.870738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165db7d1-424a-47a1-919b-7d5a30145b21-logs\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.870815 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.870887 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-config-data\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.870922 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlb8\" (UniqueName: \"kubernetes.io/projected/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-kube-api-access-njlb8\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.870959 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck85s\" (UniqueName: \"kubernetes.io/projected/165db7d1-424a-47a1-919b-7d5a30145b21-kube-api-access-ck85s\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.871034 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.871060 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-config-data\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.872048 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/165db7d1-424a-47a1-919b-7d5a30145b21-logs\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.886696 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.888910 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.898414 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.898567 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.901869 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck85s\" (UniqueName: \"kubernetes.io/projected/165db7d1-424a-47a1-919b-7d5a30145b21-kube-api-access-ck85s\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.910907 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-config-data\") pod \"nova-metadata-0\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.927415 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.942133 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.943729 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.972759 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.972863 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-config-data\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.972892 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.972984 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " 
pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.973034 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-config\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.973065 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvcn\" (UniqueName: \"kubernetes.io/projected/acb65c9d-9b2c-4294-9255-3707b9582008-kube-api-access-4zvcn\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.973093 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.973133 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njlb8\" (UniqueName: \"kubernetes.io/projected/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-kube-api-access-njlb8\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.973163 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " 
pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.979409 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-config-data\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.981484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:27 crc kubenswrapper[4787]: I0219 19:43:27.989177 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njlb8\" (UniqueName: \"kubernetes.io/projected/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-kube-api-access-njlb8\") pod \"nova-scheduler-0\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075301 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075379 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075427 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075487 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-config\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075508 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299ww\" (UniqueName: \"kubernetes.io/projected/c783ce0e-0afa-4137-985b-9a1c584070e6-kube-api-access-299ww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075544 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvcn\" (UniqueName: \"kubernetes.io/projected/acb65c9d-9b2c-4294-9255-3707b9582008-kube-api-access-4zvcn\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075574 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075664 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.075731 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.076563 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-config\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.076650 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.077640 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.077947 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.078302 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.094001 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvcn\" (UniqueName: \"kubernetes.io/projected/acb65c9d-9b2c-4294-9255-3707b9582008-kube-api-access-4zvcn\") pod \"dnsmasq-dns-9b86998b5-7wmdr\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.114216 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.187061 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299ww\" (UniqueName: \"kubernetes.io/projected/c783ce0e-0afa-4137-985b-9a1c584070e6-kube-api-access-299ww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.187559 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.187628 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.200530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.208227 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.214554 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299ww\" (UniqueName: \"kubernetes.io/projected/c783ce0e-0afa-4137-985b-9a1c584070e6-kube-api-access-299ww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.259711 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.270361 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.285786 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.338674 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn2fb"] Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.651511 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:43:28 crc kubenswrapper[4787]: I0219 19:43:28.847558 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.163984 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.224266 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfd951d7-0a54-44d0-88e4-6a9d22bf2589","Type":"ContainerStarted","Data":"d8750c350071efc4db443b18603abe4aee83c6268a21ace66fb36851bcbf2861"} Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.233084 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"165db7d1-424a-47a1-919b-7d5a30145b21","Type":"ContainerStarted","Data":"cf54265441ba47d3a01d361e15bc9a1af5a3dbc650c63fd1f0fada291a43bcf2"} Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.252287 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e614e88b-cc22-4e79-9dd5-fe0fe47e4372","Type":"ContainerStarted","Data":"14d40a2a1d8aaa5c8ef9599f1d1c43755b86e01e3bf8fff6a7d73a10e58d6612"} Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.269759 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn2fb" event={"ID":"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0","Type":"ContainerStarted","Data":"6f15087f35d3cba1ad843fbf5e9b04ff128b0e2b507153ee1fb1b0c2ecdabc2c"} Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.269808 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn2fb" event={"ID":"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0","Type":"ContainerStarted","Data":"2f5adedfcb5b3fd2a28f19cccd59ea1d558370291267709efbbc94210ac6cdae"} Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.299228 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kn2fb" podStartSLOduration=2.299211256 podStartE2EDuration="2.299211256s" podCreationTimestamp="2026-02-19 19:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:29.290439466 +0000 UTC m=+1477.081105428" watchObservedRunningTime="2026-02-19 19:43:29.299211256 +0000 UTC m=+1477.089877198" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.312800 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:43:29 crc kubenswrapper[4787]: W0219 19:43:29.317363 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc783ce0e_0afa_4137_985b_9a1c584070e6.slice/crio-2d7af8e5d6705c82a32ea2817c902c929e0b51294d690db2c8305b076013d460 WatchSource:0}: Error finding container 2d7af8e5d6705c82a32ea2817c902c929e0b51294d690db2c8305b076013d460: Status 404 returned error can't find the container with id 2d7af8e5d6705c82a32ea2817c902c929e0b51294d690db2c8305b076013d460 Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.351191 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7wmdr"] Feb 19 19:43:29 crc kubenswrapper[4787]: W0219 19:43:29.367840 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacb65c9d_9b2c_4294_9255_3707b9582008.slice/crio-55f3b9d3103cacab24ac351384f048bd1cb346e9d3b100f34f877154605eec39 WatchSource:0}: Error finding container 55f3b9d3103cacab24ac351384f048bd1cb346e9d3b100f34f877154605eec39: Status 404 returned error can't find the container with id 55f3b9d3103cacab24ac351384f048bd1cb346e9d3b100f34f877154605eec39 Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.541693 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2hq2k"] Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.543671 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.557853 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.558074 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.584689 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2hq2k"] Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.667051 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-config-data\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.667125 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.667220 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjfwh\" (UniqueName: \"kubernetes.io/projected/a96d0657-656e-4614-bac3-490e595478dd-kube-api-access-tjfwh\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.667288 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-scripts\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.769888 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-scripts\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.770337 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-config-data\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.770422 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.770559 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjfwh\" (UniqueName: \"kubernetes.io/projected/a96d0657-656e-4614-bac3-490e595478dd-kube-api-access-tjfwh\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.779827 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-scripts\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.780506 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.780538 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-config-data\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.796033 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjfwh\" (UniqueName: \"kubernetes.io/projected/a96d0657-656e-4614-bac3-490e595478dd-kube-api-access-tjfwh\") pod \"nova-cell1-conductor-db-sync-2hq2k\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:29 crc kubenswrapper[4787]: I0219 19:43:29.965490 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2hq2k"
Feb 19 19:43:30 crc kubenswrapper[4787]: I0219 19:43:30.297730 4787 generic.go:334] "Generic (PLEG): container finished" podID="acb65c9d-9b2c-4294-9255-3707b9582008" containerID="bd75f0b4fcf65d5a1e99195daa8467fa687d23fbb7f905de39c506bbe6eb8d3a" exitCode=0
Feb 19 19:43:30 crc kubenswrapper[4787]: I0219 19:43:30.298099 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" event={"ID":"acb65c9d-9b2c-4294-9255-3707b9582008","Type":"ContainerDied","Data":"bd75f0b4fcf65d5a1e99195daa8467fa687d23fbb7f905de39c506bbe6eb8d3a"}
Feb 19 19:43:30 crc kubenswrapper[4787]: I0219 19:43:30.298129 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" event={"ID":"acb65c9d-9b2c-4294-9255-3707b9582008","Type":"ContainerStarted","Data":"55f3b9d3103cacab24ac351384f048bd1cb346e9d3b100f34f877154605eec39"}
Feb 19 19:43:30 crc kubenswrapper[4787]: I0219 19:43:30.317039 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c783ce0e-0afa-4137-985b-9a1c584070e6","Type":"ContainerStarted","Data":"2d7af8e5d6705c82a32ea2817c902c929e0b51294d690db2c8305b076013d460"}
Feb 19 19:43:30 crc kubenswrapper[4787]: I0219 19:43:30.539829 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2hq2k"]
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.357850 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" event={"ID":"acb65c9d-9b2c-4294-9255-3707b9582008","Type":"ContainerStarted","Data":"c47960dc265711777ca13539eee91672fcf10adedcd146de619bc3264880052a"}
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.358832 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr"
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.370357 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" event={"ID":"a96d0657-656e-4614-bac3-490e595478dd","Type":"ContainerStarted","Data":"63e9085b9f6f51a0a6f2bbc5ae47da7024e774b61a692b2ea62658123466dbb7"}
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.370409 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" event={"ID":"a96d0657-656e-4614-bac3-490e595478dd","Type":"ContainerStarted","Data":"07527dc238bf3d2297c9f28336e7d6d0902e940cc67f5ed5cf32b3248f16030d"}
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.383233 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" podStartSLOduration=4.383210799 podStartE2EDuration="4.383210799s" podCreationTimestamp="2026-02-19 19:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:31.378975248 +0000 UTC m=+1479.169641190" watchObservedRunningTime="2026-02-19 19:43:31.383210799 +0000 UTC m=+1479.173876741"
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.399366 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" podStartSLOduration=2.399346649 podStartE2EDuration="2.399346649s" podCreationTimestamp="2026-02-19 19:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:31.393991986 +0000 UTC m=+1479.184657918" watchObservedRunningTime="2026-02-19 19:43:31.399346649 +0000 UTC m=+1479.190012591"
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.748426 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 19 19:43:31 crc kubenswrapper[4787]: I0219 19:43:31.773445 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.432773 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfd951d7-0a54-44d0-88e4-6a9d22bf2589","Type":"ContainerStarted","Data":"b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468"}
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.437237 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"165db7d1-424a-47a1-919b-7d5a30145b21","Type":"ContainerStarted","Data":"12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4"}
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.437337 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"165db7d1-424a-47a1-919b-7d5a30145b21","Type":"ContainerStarted","Data":"48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d"}
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.437346 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-metadata" containerID="cri-o://12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4" gracePeriod=30
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.437231 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-log" containerID="cri-o://48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d" gracePeriod=30
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.445827 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e614e88b-cc22-4e79-9dd5-fe0fe47e4372","Type":"ContainerStarted","Data":"d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721"}
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.447672 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c783ce0e-0afa-4137-985b-9a1c584070e6","Type":"ContainerStarted","Data":"d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2"}
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.447824 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c783ce0e-0afa-4137-985b-9a1c584070e6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2" gracePeriod=30
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.454148 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.048056196 podStartE2EDuration="7.454133569s" podCreationTimestamp="2026-02-19 19:43:27 +0000 UTC" firstStartedPulling="2026-02-19 19:43:29.209516609 +0000 UTC m=+1477.000182551" lastFinishedPulling="2026-02-19 19:43:33.615593982 +0000 UTC m=+1481.406259924" observedRunningTime="2026-02-19 19:43:34.452633036 +0000 UTC m=+1482.243298978" watchObservedRunningTime="2026-02-19 19:43:34.454133569 +0000 UTC m=+1482.244799501"
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.483594 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.544309863 podStartE2EDuration="7.483575808s" podCreationTimestamp="2026-02-19 19:43:27 +0000 UTC" firstStartedPulling="2026-02-19 19:43:28.681497495 +0000 UTC m=+1476.472163437" lastFinishedPulling="2026-02-19 19:43:33.62076344 +0000 UTC m=+1481.411429382" observedRunningTime="2026-02-19 19:43:34.468995033 +0000 UTC m=+1482.259660975" watchObservedRunningTime="2026-02-19 19:43:34.483575808 +0000 UTC m=+1482.274241750"
Feb 19 19:43:34 crc kubenswrapper[4787]: I0219 19:43:34.497349 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.204413823 podStartE2EDuration="7.497332171s" podCreationTimestamp="2026-02-19 19:43:27 +0000 UTC" firstStartedPulling="2026-02-19 19:43:29.322682205 +0000 UTC m=+1477.113348147" lastFinishedPulling="2026-02-19 19:43:33.615600553 +0000 UTC m=+1481.406266495" observedRunningTime="2026-02-19 19:43:34.490569988 +0000 UTC m=+1482.281235930" watchObservedRunningTime="2026-02-19 19:43:34.497332171 +0000 UTC m=+1482.287998113"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.366232 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444457 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-log-httpd\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444520 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf5n7\" (UniqueName: \"kubernetes.io/projected/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-kube-api-access-rf5n7\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444572 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-scripts\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444696 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-sg-core-conf-yaml\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444758 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-config-data\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444844 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-run-httpd\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.444878 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-combined-ca-bundle\") pod \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\" (UID: \"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c\") "
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.446408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.449228 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.457518 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-kube-api-access-rf5n7" (OuterVolumeSpecName: "kube-api-access-rf5n7") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "kube-api-access-rf5n7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.466705 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-scripts" (OuterVolumeSpecName: "scripts") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.480674 4787 generic.go:334] "Generic (PLEG): container finished" podID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerID="8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1" exitCode=137
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.480786 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerDied","Data":"8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1"}
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.480820 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c","Type":"ContainerDied","Data":"4f9a9f512d47197b2f008f641747df5601b92869f42a496c5cf4234a31a12425"}
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.480840 4787 scope.go:117] "RemoveContainer" containerID="8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.481022 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.488895 4787 generic.go:334] "Generic (PLEG): container finished" podID="165db7d1-424a-47a1-919b-7d5a30145b21" containerID="48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d" exitCode=143
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.489044 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"165db7d1-424a-47a1-919b-7d5a30145b21","Type":"ContainerDied","Data":"48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d"}
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.493743 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e614e88b-cc22-4e79-9dd5-fe0fe47e4372","Type":"ContainerStarted","Data":"e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478"}
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.496677 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.532090 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.769103792 podStartE2EDuration="8.532072651s" podCreationTimestamp="2026-02-19 19:43:27 +0000 UTC" firstStartedPulling="2026-02-19 19:43:28.852707876 +0000 UTC m=+1476.643373818" lastFinishedPulling="2026-02-19 19:43:33.615676725 +0000 UTC m=+1481.406342677" observedRunningTime="2026-02-19 19:43:35.511174355 +0000 UTC m=+1483.301840297" watchObservedRunningTime="2026-02-19 19:43:35.532072651 +0000 UTC m=+1483.322738593"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.547992 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.548023 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.548032 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.548041 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf5n7\" (UniqueName: \"kubernetes.io/projected/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-kube-api-access-rf5n7\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.548050 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.554652 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.575214 4787 scope.go:117] "RemoveContainer" containerID="addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.603409 4787 scope.go:117] "RemoveContainer" containerID="213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.642689 4787 scope.go:117] "RemoveContainer" containerID="86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.651375 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.683972 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-config-data" (OuterVolumeSpecName: "config-data") pod "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" (UID: "125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.754640 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.770647 4787 scope.go:117] "RemoveContainer" containerID="8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.771224 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1\": container with ID starting with 8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1 not found: ID does not exist" containerID="8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.771266 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1"} err="failed to get container status \"8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1\": rpc error: code = NotFound desc = could not find container \"8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1\": container with ID starting with 8a50cd22c1401371bb792908e7d0e1f7c98962d0cfbdffb9454a14e6cd092bb1 not found: ID does not exist"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.771312 4787 scope.go:117] "RemoveContainer" containerID="addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.772030 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c\": container with ID starting with addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c not found: ID does not exist" containerID="addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.772089 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c"} err="failed to get container status \"addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c\": rpc error: code = NotFound desc = could not find container \"addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c\": container with ID starting with addc82df6a346b5cfac9e5558ad66867f6942b1056e5c4fb273645780faa3d4c not found: ID does not exist"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.772125 4787 scope.go:117] "RemoveContainer" containerID="213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.772546 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89\": container with ID starting with 213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89 not found: ID does not exist" containerID="213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.772576 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89"} err="failed to get container status \"213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89\": rpc error: code = NotFound desc = could not find container \"213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89\": container with ID starting with 213d19ae7eb67794f3a4ccb54f6fa266a10e7dbc8f63ec8bc7d7d73b236d4d89 not found: ID does not exist"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.772595 4787 scope.go:117] "RemoveContainer" containerID="86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.772908 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375\": container with ID starting with 86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375 not found: ID does not exist" containerID="86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.772936 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375"} err="failed to get container status \"86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375\": rpc error: code = NotFound desc = could not find container \"86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375\": container with ID starting with 86c44a37d43fdd9716c33a12e580fb3c78525ccf4699ce2449f6f8a612b41375 not found: ID does not exist"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.826512 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.842361 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862080 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.862583 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="proxy-httpd"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862600 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="proxy-httpd"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.862665 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-notification-agent"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862672 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-notification-agent"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.862700 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="sg-core"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862707 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="sg-core"
Feb 19 19:43:35 crc kubenswrapper[4787]: E0219 19:43:35.862719 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-central-agent"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862727 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-central-agent"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862929 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="sg-core"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862952 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-notification-agent"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862971 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="proxy-httpd"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.862993 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" containerName="ceilometer-central-agent"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.864989 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.867640 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.868054 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.879910 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.958793 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-scripts\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.958853 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.959089 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvppb\" (UniqueName: \"kubernetes.io/projected/9d018105-8445-48e5-b826-3991f7fa306f-kube-api-access-wvppb\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.959165 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-log-httpd\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.959466 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-config-data\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.959661 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:35 crc kubenswrapper[4787]: I0219 19:43:35.959781 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-run-httpd\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062070 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-scripts\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062137 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvppb\" (UniqueName: \"kubernetes.io/projected/9d018105-8445-48e5-b826-3991f7fa306f-kube-api-access-wvppb\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062266 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-log-httpd\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-config-data\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062831 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062881 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-run-httpd\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.062875 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-log-httpd\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.063348 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-run-httpd\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.066930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.067277 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-scripts\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.067512 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-config-data\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.069469 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.086300 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvppb\" (UniqueName: \"kubernetes.io/projected/9d018105-8445-48e5-b826-3991f7fa306f-kube-api-access-wvppb\") pod \"ceilometer-0\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " pod="openstack/ceilometer-0"
Feb 19 19:43:36 crc kubenswrapper[4787]: I0219 19:43:36.186385 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:43:37 crc kubenswrapper[4787]: I0219 19:43:36.763906 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:43:37 crc kubenswrapper[4787]: I0219 19:43:36.911731 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c" path="/var/lib/kubelet/pods/125c29ff-e2e2-4f54-aa8e-a96f2c5bfa0c/volumes"
Feb 19 19:43:37 crc kubenswrapper[4787]: I0219 19:43:37.524199 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerStarted","Data":"e6d6ca3e9e0217d4d565de753879d5e4c7b7a89464774351c41c7674c4768709"}
Feb 19 19:43:37 crc kubenswrapper[4787]: I0219 19:43:37.524537 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerStarted","Data":"3cba8e8a7177d9c5047287bb556e4cb2db441153f92f7a652fba1bb380b45243"}
Feb 19 19:43:37 crc kubenswrapper[4787]: I0219 19:43:37.938680 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 19:43:37 crc kubenswrapper[4787]: I0219 19:43:37.939049 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.115366 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.115420 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.272306 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.274201 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.274358 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.287358 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.315185 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.376370 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-glqtc"]
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.376629 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" containerName="dnsmasq-dns" containerID="cri-o://456eb9b5f4b2db714f24d6dfecbcb833f45d387d35b609831910f177ac110e57" gracePeriod=10
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.562260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerStarted","Data":"04412ad00499dfd222230358fb200963a2232e86f5776bda1874a1010f6b039c"}
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.574708 4787 generic.go:334] "Generic (PLEG): container finished" podID="afc5e6eb-f81c-447a-a720-4304943b3451" containerID="456eb9b5f4b2db714f24d6dfecbcb833f45d387d35b609831910f177ac110e57" exitCode=0
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.575759 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" event={"ID":"afc5e6eb-f81c-447a-a720-4304943b3451","Type":"ContainerDied","Data":"456eb9b5f4b2db714f24d6dfecbcb833f45d387d35b609831910f177ac110e57"}
Feb 19 19:43:38 crc kubenswrapper[4787]: I0219 19:43:38.618551 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.049085 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc"
Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.143213 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6klj\" (UniqueName: \"kubernetes.io/projected/afc5e6eb-f81c-447a-a720-4304943b3451-kube-api-access-g6klj\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") "
Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.143327 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-sb\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") "
Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.143378 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-swift-storage-0\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") "
Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.143469 4787
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-config\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.143546 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.143573 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-svc\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.165832 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc5e6eb-f81c-447a-a720-4304943b3451-kube-api-access-g6klj" (OuterVolumeSpecName: "kube-api-access-g6klj") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "kube-api-access-g6klj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.199853 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.233:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.200100 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.233:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.246961 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6klj\" (UniqueName: \"kubernetes.io/projected/afc5e6eb-f81c-447a-a720-4304943b3451-kube-api-access-g6klj\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.263791 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.263849 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.263973 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.264812 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.264875 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" gracePeriod=600 Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.351308 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.351901 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb\") pod \"afc5e6eb-f81c-447a-a720-4304943b3451\" (UID: \"afc5e6eb-f81c-447a-a720-4304943b3451\") " Feb 19 19:43:39 crc kubenswrapper[4787]: W0219 19:43:39.352065 4787 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/afc5e6eb-f81c-447a-a720-4304943b3451/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.352079 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.352579 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.358114 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-config" (OuterVolumeSpecName: "config") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.363313 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.368836 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: E0219 19:43:39.397823 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.454238 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.454269 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:39 crc kubenswrapper[4787]: 
I0219 19:43:39.454278 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.455123 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "afc5e6eb-f81c-447a-a720-4304943b3451" (UID: "afc5e6eb-f81c-447a-a720-4304943b3451"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.556819 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afc5e6eb-f81c-447a-a720-4304943b3451-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.585765 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" exitCode=0 Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.585826 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc"} Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.585899 4787 scope.go:117] "RemoveContainer" containerID="4c290f7666b81201ba0242964eb17cef06ef0c6b6b9b4a97e80ee9c3f5daac23" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.586683 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:43:39 crc kubenswrapper[4787]: E0219 19:43:39.587066 4787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.593597 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerStarted","Data":"ac541ba3bc5338343e46ad89c316565e94104b8d9e3b33b4860537c134bbea43"} Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.607619 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" event={"ID":"afc5e6eb-f81c-447a-a720-4304943b3451","Type":"ContainerDied","Data":"606f35be3d0690365ecf4f54391518a077f7170646e0708086dc356c0213d299"} Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.607702 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-glqtc" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.685294 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-glqtc"] Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.697378 4787 scope.go:117] "RemoveContainer" containerID="456eb9b5f4b2db714f24d6dfecbcb833f45d387d35b609831910f177ac110e57" Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.703246 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-glqtc"] Feb 19 19:43:39 crc kubenswrapper[4787]: I0219 19:43:39.729140 4787 scope.go:117] "RemoveContainer" containerID="817864a48f9313752448527f76e9bd5c36e80083233c2c77a85a09456880554f" Feb 19 19:43:40 crc kubenswrapper[4787]: I0219 19:43:40.623708 4787 generic.go:334] "Generic (PLEG): container finished" podID="a96d0657-656e-4614-bac3-490e595478dd" containerID="63e9085b9f6f51a0a6f2bbc5ae47da7024e774b61a692b2ea62658123466dbb7" exitCode=0 Feb 19 19:43:40 crc kubenswrapper[4787]: I0219 19:43:40.623795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" event={"ID":"a96d0657-656e-4614-bac3-490e595478dd","Type":"ContainerDied","Data":"63e9085b9f6f51a0a6f2bbc5ae47da7024e774b61a692b2ea62658123466dbb7"} Feb 19 19:43:40 crc kubenswrapper[4787]: I0219 19:43:40.626521 4787 generic.go:334] "Generic (PLEG): container finished" podID="0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" containerID="6f15087f35d3cba1ad843fbf5e9b04ff128b0e2b507153ee1fb1b0c2ecdabc2c" exitCode=0 Feb 19 19:43:40 crc kubenswrapper[4787]: I0219 19:43:40.626554 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn2fb" event={"ID":"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0","Type":"ContainerDied","Data":"6f15087f35d3cba1ad843fbf5e9b04ff128b0e2b507153ee1fb1b0c2ecdabc2c"} Feb 19 19:43:40 crc kubenswrapper[4787]: I0219 19:43:40.905101 4787 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" path="/var/lib/kubelet/pods/afc5e6eb-f81c-447a-a720-4304943b3451/volumes" Feb 19 19:43:41 crc kubenswrapper[4787]: I0219 19:43:41.640723 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerStarted","Data":"d46fbb6e3a3e5e6d7d1b6b070caae84b6a06f05f5b4a10d4664b0b799026d205"} Feb 19 19:43:41 crc kubenswrapper[4787]: I0219 19:43:41.665825 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.432214821 podStartE2EDuration="6.665806979s" podCreationTimestamp="2026-02-19 19:43:35 +0000 UTC" firstStartedPulling="2026-02-19 19:43:36.768480319 +0000 UTC m=+1484.559146261" lastFinishedPulling="2026-02-19 19:43:41.002072477 +0000 UTC m=+1488.792738419" observedRunningTime="2026-02-19 19:43:41.658726658 +0000 UTC m=+1489.449392610" watchObservedRunningTime="2026-02-19 19:43:41.665806979 +0000 UTC m=+1489.456472921" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.236289 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.244203 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.354544 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-config-data\") pod \"a96d0657-656e-4614-bac3-490e595478dd\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.354713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-config-data\") pod \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.354773 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-scripts\") pod \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.354847 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjfwh\" (UniqueName: \"kubernetes.io/projected/a96d0657-656e-4614-bac3-490e595478dd-kube-api-access-tjfwh\") pod \"a96d0657-656e-4614-bac3-490e595478dd\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.354887 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-scripts\") pod \"a96d0657-656e-4614-bac3-490e595478dd\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.354940 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-combined-ca-bundle\") pod \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.355017 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hcqc\" (UniqueName: \"kubernetes.io/projected/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-kube-api-access-8hcqc\") pod \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\" (UID: \"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.355092 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-combined-ca-bundle\") pod \"a96d0657-656e-4614-bac3-490e595478dd\" (UID: \"a96d0657-656e-4614-bac3-490e595478dd\") " Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.360775 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-scripts" (OuterVolumeSpecName: "scripts") pod "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" (UID: "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.361072 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a96d0657-656e-4614-bac3-490e595478dd-kube-api-access-tjfwh" (OuterVolumeSpecName: "kube-api-access-tjfwh") pod "a96d0657-656e-4614-bac3-490e595478dd" (UID: "a96d0657-656e-4614-bac3-490e595478dd"). InnerVolumeSpecName "kube-api-access-tjfwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.370396 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-scripts" (OuterVolumeSpecName: "scripts") pod "a96d0657-656e-4614-bac3-490e595478dd" (UID: "a96d0657-656e-4614-bac3-490e595478dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.375894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-kube-api-access-8hcqc" (OuterVolumeSpecName: "kube-api-access-8hcqc") pod "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" (UID: "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0"). InnerVolumeSpecName "kube-api-access-8hcqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.389286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-config-data" (OuterVolumeSpecName: "config-data") pod "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" (UID: "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.390477 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" (UID: "0d414ea7-bf01-41ad-9d7e-ed31676ae9e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.392249 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-config-data" (OuterVolumeSpecName: "config-data") pod "a96d0657-656e-4614-bac3-490e595478dd" (UID: "a96d0657-656e-4614-bac3-490e595478dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.408333 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a96d0657-656e-4614-bac3-490e595478dd" (UID: "a96d0657-656e-4614-bac3-490e595478dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458155 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458469 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458479 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458487 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc 
kubenswrapper[4787]: I0219 19:43:42.458497 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjfwh\" (UniqueName: \"kubernetes.io/projected/a96d0657-656e-4614-bac3-490e595478dd-kube-api-access-tjfwh\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458508 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a96d0657-656e-4614-bac3-490e595478dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458516 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.458524 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hcqc\" (UniqueName: \"kubernetes.io/projected/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0-kube-api-access-8hcqc\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.660542 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kn2fb" event={"ID":"0d414ea7-bf01-41ad-9d7e-ed31676ae9e0","Type":"ContainerDied","Data":"2f5adedfcb5b3fd2a28f19cccd59ea1d558370291267709efbbc94210ac6cdae"} Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.660591 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5adedfcb5b3fd2a28f19cccd59ea1d558370291267709efbbc94210ac6cdae" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.660702 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kn2fb" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.664145 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" event={"ID":"a96d0657-656e-4614-bac3-490e595478dd","Type":"ContainerDied","Data":"07527dc238bf3d2297c9f28336e7d6d0902e940cc67f5ed5cf32b3248f16030d"} Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.664187 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2hq2k" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.664205 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07527dc238bf3d2297c9f28336e7d6d0902e940cc67f5ed5cf32b3248f16030d" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.664662 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.742141 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 19:43:42 crc kubenswrapper[4787]: E0219 19:43:42.742763 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" containerName="init" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.742786 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" containerName="init" Feb 19 19:43:42 crc kubenswrapper[4787]: E0219 19:43:42.742812 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" containerName="dnsmasq-dns" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.742824 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" containerName="dnsmasq-dns" Feb 19 19:43:42 crc kubenswrapper[4787]: E0219 19:43:42.742857 4787 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a96d0657-656e-4614-bac3-490e595478dd" containerName="nova-cell1-conductor-db-sync" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.742865 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a96d0657-656e-4614-bac3-490e595478dd" containerName="nova-cell1-conductor-db-sync" Feb 19 19:43:42 crc kubenswrapper[4787]: E0219 19:43:42.742899 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" containerName="nova-manage" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.742908 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" containerName="nova-manage" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.743191 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" containerName="nova-manage" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.743228 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a96d0657-656e-4614-bac3-490e595478dd" containerName="nova-cell1-conductor-db-sync" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.743244 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc5e6eb-f81c-447a-a720-4304943b3451" containerName="dnsmasq-dns" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.744282 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.752530 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.783183 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.842349 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.842571 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-log" containerID="cri-o://d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721" gracePeriod=30 Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.842750 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-api" containerID="cri-o://e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478" gracePeriod=30 Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.871542 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.871844 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" containerName="nova-scheduler-scheduler" containerID="cri-o://b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" gracePeriod=30 Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.883551 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmmgh\" (UniqueName: 
\"kubernetes.io/projected/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-kube-api-access-jmmgh\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.883726 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.883810 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.986237 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.986330 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmmgh\" (UniqueName: \"kubernetes.io/projected/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-kube-api-access-jmmgh\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.986432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.993357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:42 crc kubenswrapper[4787]: I0219 19:43:42.993545 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:43 crc kubenswrapper[4787]: I0219 19:43:43.011151 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmmgh\" (UniqueName: \"kubernetes.io/projected/7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b-kube-api-access-jmmgh\") pod \"nova-cell1-conductor-0\" (UID: \"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b\") " pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:43 crc kubenswrapper[4787]: I0219 19:43:43.125259 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:43 crc kubenswrapper[4787]: E0219 19:43:43.269785 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 19:43:43 crc kubenswrapper[4787]: E0219 19:43:43.279599 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 19:43:43 crc kubenswrapper[4787]: E0219 19:43:43.284605 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 19:43:43 crc kubenswrapper[4787]: E0219 19:43:43.284675 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" containerName="nova-scheduler-scheduler" Feb 19 19:43:43 crc kubenswrapper[4787]: I0219 19:43:43.627025 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 19:43:43 crc kubenswrapper[4787]: I0219 19:43:43.698843 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b","Type":"ContainerStarted","Data":"e1abbd8286dd07d148d9be61fe2dc70591625bd103fdd32672a8f720eff396f0"} Feb 19 19:43:43 crc kubenswrapper[4787]: I0219 19:43:43.709827 4787 generic.go:334] "Generic (PLEG): container finished" podID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerID="d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721" exitCode=143 Feb 19 19:43:43 crc kubenswrapper[4787]: I0219 19:43:43.710661 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e614e88b-cc22-4e79-9dd5-fe0fe47e4372","Type":"ContainerDied","Data":"d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721"} Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.562878 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-vmllq"] Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.565047 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.595077 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-7101-account-create-update-hn7v5"] Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.596647 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.601158 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.621367 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vmllq"] Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.647286 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7101-account-create-update-hn7v5"] Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.726346 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ws9\" (UniqueName: \"kubernetes.io/projected/5d3b03a5-b16c-4c89-8196-3d26a077661a-kube-api-access-l6ws9\") pod \"aodh-7101-account-create-update-hn7v5\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.726427 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3b03a5-b16c-4c89-8196-3d26a077661a-operator-scripts\") pod \"aodh-7101-account-create-update-hn7v5\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.726596 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bd62de-d209-432d-acdc-af4d1d3d7392-operator-scripts\") pod \"aodh-db-create-vmllq\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.726907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mfrgg\" (UniqueName: \"kubernetes.io/projected/54bd62de-d209-432d-acdc-af4d1d3d7392-kube-api-access-mfrgg\") pod \"aodh-db-create-vmllq\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.741817 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b","Type":"ContainerStarted","Data":"4a8126b9194d607366729d439bebdd6bf50a0a56cd4887b5791622be3ac6a1c2"} Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.743111 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.793349 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.793323213 podStartE2EDuration="2.793323213s" podCreationTimestamp="2026-02-19 19:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:44.77638838 +0000 UTC m=+1492.567054322" watchObservedRunningTime="2026-02-19 19:43:44.793323213 +0000 UTC m=+1492.583989155" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.828867 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bd62de-d209-432d-acdc-af4d1d3d7392-operator-scripts\") pod \"aodh-db-create-vmllq\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.829013 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrgg\" (UniqueName: \"kubernetes.io/projected/54bd62de-d209-432d-acdc-af4d1d3d7392-kube-api-access-mfrgg\") pod \"aodh-db-create-vmllq\" (UID: 
\"54bd62de-d209-432d-acdc-af4d1d3d7392\") " pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.829087 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6ws9\" (UniqueName: \"kubernetes.io/projected/5d3b03a5-b16c-4c89-8196-3d26a077661a-kube-api-access-l6ws9\") pod \"aodh-7101-account-create-update-hn7v5\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.829121 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3b03a5-b16c-4c89-8196-3d26a077661a-operator-scripts\") pod \"aodh-7101-account-create-update-hn7v5\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.830049 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3b03a5-b16c-4c89-8196-3d26a077661a-operator-scripts\") pod \"aodh-7101-account-create-update-hn7v5\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.830383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bd62de-d209-432d-acdc-af4d1d3d7392-operator-scripts\") pod \"aodh-db-create-vmllq\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.856557 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6ws9\" (UniqueName: \"kubernetes.io/projected/5d3b03a5-b16c-4c89-8196-3d26a077661a-kube-api-access-l6ws9\") pod \"aodh-7101-account-create-update-hn7v5\" (UID: 
\"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.863970 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrgg\" (UniqueName: \"kubernetes.io/projected/54bd62de-d209-432d-acdc-af4d1d3d7392-kube-api-access-mfrgg\") pod \"aodh-db-create-vmllq\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.895946 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:44 crc kubenswrapper[4787]: I0219 19:43:44.920265 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:45 crc kubenswrapper[4787]: W0219 19:43:45.469247 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d3b03a5_b16c_4c89_8196_3d26a077661a.slice/crio-f42d6dca849f8e58c2360d56e5dc5a4f9b38e6d6a5466d65a702997d83d50e3c WatchSource:0}: Error finding container f42d6dca849f8e58c2360d56e5dc5a4f9b38e6d6a5466d65a702997d83d50e3c: Status 404 returned error can't find the container with id f42d6dca849f8e58c2360d56e5dc5a4f9b38e6d6a5466d65a702997d83d50e3c Feb 19 19:43:45 crc kubenswrapper[4787]: I0219 19:43:45.479029 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7101-account-create-update-hn7v5"] Feb 19 19:43:45 crc kubenswrapper[4787]: I0219 19:43:45.573124 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vmllq"] Feb 19 19:43:45 crc kubenswrapper[4787]: W0219 19:43:45.589591 4787 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54bd62de_d209_432d_acdc_af4d1d3d7392.slice/crio-68a42dda5e9e547f9c6d43e2e106dc76c71f2809b10ad8c6b075c31387bd90fa WatchSource:0}: Error finding container 68a42dda5e9e547f9c6d43e2e106dc76c71f2809b10ad8c6b075c31387bd90fa: Status 404 returned error can't find the container with id 68a42dda5e9e547f9c6d43e2e106dc76c71f2809b10ad8c6b075c31387bd90fa Feb 19 19:43:45 crc kubenswrapper[4787]: I0219 19:43:45.754980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7101-account-create-update-hn7v5" event={"ID":"5d3b03a5-b16c-4c89-8196-3d26a077661a","Type":"ContainerStarted","Data":"f42d6dca849f8e58c2360d56e5dc5a4f9b38e6d6a5466d65a702997d83d50e3c"} Feb 19 19:43:45 crc kubenswrapper[4787]: I0219 19:43:45.757900 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vmllq" event={"ID":"54bd62de-d209-432d-acdc-af4d1d3d7392","Type":"ContainerStarted","Data":"68a42dda5e9e547f9c6d43e2e106dc76c71f2809b10ad8c6b075c31387bd90fa"} Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.525769 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.575127 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-combined-ca-bundle\") pod \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.575292 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-config-data\") pod \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.575344 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-logs\") pod \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.575552 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nstkq\" (UniqueName: \"kubernetes.io/projected/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-kube-api-access-nstkq\") pod \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\" (UID: \"e614e88b-cc22-4e79-9dd5-fe0fe47e4372\") " Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.577936 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-logs" (OuterVolumeSpecName: "logs") pod "e614e88b-cc22-4e79-9dd5-fe0fe47e4372" (UID: "e614e88b-cc22-4e79-9dd5-fe0fe47e4372"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.592207 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-kube-api-access-nstkq" (OuterVolumeSpecName: "kube-api-access-nstkq") pod "e614e88b-cc22-4e79-9dd5-fe0fe47e4372" (UID: "e614e88b-cc22-4e79-9dd5-fe0fe47e4372"). InnerVolumeSpecName "kube-api-access-nstkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.619553 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-config-data" (OuterVolumeSpecName: "config-data") pod "e614e88b-cc22-4e79-9dd5-fe0fe47e4372" (UID: "e614e88b-cc22-4e79-9dd5-fe0fe47e4372"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.642817 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e614e88b-cc22-4e79-9dd5-fe0fe47e4372" (UID: "e614e88b-cc22-4e79-9dd5-fe0fe47e4372"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.678452 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nstkq\" (UniqueName: \"kubernetes.io/projected/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-kube-api-access-nstkq\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.678484 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.678493 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.678503 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e614e88b-cc22-4e79-9dd5-fe0fe47e4372-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.773387 4787 generic.go:334] "Generic (PLEG): container finished" podID="54bd62de-d209-432d-acdc-af4d1d3d7392" containerID="6a7578e70d4167684c52b75db29a2c06804ea8f9400529f5d880add104298db6" exitCode=0 Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.773989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vmllq" event={"ID":"54bd62de-d209-432d-acdc-af4d1d3d7392","Type":"ContainerDied","Data":"6a7578e70d4167684c52b75db29a2c06804ea8f9400529f5d880add104298db6"} Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.777118 4787 generic.go:334] "Generic (PLEG): container finished" podID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerID="e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478" exitCode=0 Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.777168 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e614e88b-cc22-4e79-9dd5-fe0fe47e4372","Type":"ContainerDied","Data":"e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478"} Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.777181 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.777204 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e614e88b-cc22-4e79-9dd5-fe0fe47e4372","Type":"ContainerDied","Data":"14d40a2a1d8aaa5c8ef9599f1d1c43755b86e01e3bf8fff6a7d73a10e58d6612"} Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.777254 4787 scope.go:117] "RemoveContainer" containerID="e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.783970 4787 generic.go:334] "Generic (PLEG): container finished" podID="5d3b03a5-b16c-4c89-8196-3d26a077661a" containerID="4ae175dcb72e5ee3baa1779c935e40a059c278217348c663f599cfb1e58eff37" exitCode=0 Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.784048 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7101-account-create-update-hn7v5" event={"ID":"5d3b03a5-b16c-4c89-8196-3d26a077661a","Type":"ContainerDied","Data":"4ae175dcb72e5ee3baa1779c935e40a059c278217348c663f599cfb1e58eff37"} Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.865234 4787 scope.go:117] "RemoveContainer" containerID="d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.881091 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.897725 4787 scope.go:117] "RemoveContainer" containerID="e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478" Feb 19 19:43:46 crc kubenswrapper[4787]: E0219 
19:43:46.899366 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478\": container with ID starting with e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478 not found: ID does not exist" containerID="e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.899504 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478"} err="failed to get container status \"e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478\": rpc error: code = NotFound desc = could not find container \"e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478\": container with ID starting with e07eb0116b7325505d2f503a0e0bb57c3a48c5804b88d158e4589bcda0683478 not found: ID does not exist" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.899580 4787 scope.go:117] "RemoveContainer" containerID="d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721" Feb 19 19:43:46 crc kubenswrapper[4787]: E0219 19:43:46.916431 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721\": container with ID starting with d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721 not found: ID does not exist" containerID="d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.916487 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721"} err="failed to get container status \"d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721\": rpc 
error: code = NotFound desc = could not find container \"d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721\": container with ID starting with d32d8766831d3417338a9a3eb804aa39615037a632507c3503e174dd1db21721 not found: ID does not exist" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.918395 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.921989 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:46 crc kubenswrapper[4787]: E0219 19:43:46.922632 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-log" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.922658 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-log" Feb 19 19:43:46 crc kubenswrapper[4787]: E0219 19:43:46.922684 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-api" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.922692 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-api" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.922946 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-api" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.922981 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" containerName="nova-api-log" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.924389 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.926499 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.937091 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.992690 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcae43c8-8b12-49c9-9fc1-a386f0b52631-logs\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.992981 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-config-data\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.993576 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:46 crc kubenswrapper[4787]: I0219 19:43:46.993654 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4c5\" (UniqueName: \"kubernetes.io/projected/bcae43c8-8b12-49c9-9fc1-a386f0b52631-kube-api-access-2n4c5\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.095935 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-config-data\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.096137 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.096156 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n4c5\" (UniqueName: \"kubernetes.io/projected/bcae43c8-8b12-49c9-9fc1-a386f0b52631-kube-api-access-2n4c5\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.096231 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcae43c8-8b12-49c9-9fc1-a386f0b52631-logs\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.096658 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcae43c8-8b12-49c9-9fc1-a386f0b52631-logs\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.101716 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.101767 
4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-config-data\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.118509 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n4c5\" (UniqueName: \"kubernetes.io/projected/bcae43c8-8b12-49c9-9fc1-a386f0b52631-kube-api-access-2n4c5\") pod \"nova-api-0\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.205085 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.264156 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.299943 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-config-data\") pod \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.300101 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-combined-ca-bundle\") pod \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\" (UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.300287 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njlb8\" (UniqueName: \"kubernetes.io/projected/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-kube-api-access-njlb8\") pod \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\" 
(UID: \"bfd951d7-0a54-44d0-88e4-6a9d22bf2589\") " Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.303740 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-kube-api-access-njlb8" (OuterVolumeSpecName: "kube-api-access-njlb8") pod "bfd951d7-0a54-44d0-88e4-6a9d22bf2589" (UID: "bfd951d7-0a54-44d0-88e4-6a9d22bf2589"). InnerVolumeSpecName "kube-api-access-njlb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.333052 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfd951d7-0a54-44d0-88e4-6a9d22bf2589" (UID: "bfd951d7-0a54-44d0-88e4-6a9d22bf2589"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.343071 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-config-data" (OuterVolumeSpecName: "config-data") pod "bfd951d7-0a54-44d0-88e4-6a9d22bf2589" (UID: "bfd951d7-0a54-44d0-88e4-6a9d22bf2589"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.410791 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.410819 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njlb8\" (UniqueName: \"kubernetes.io/projected/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-kube-api-access-njlb8\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.410830 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd951d7-0a54-44d0-88e4-6a9d22bf2589-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.798481 4787 generic.go:334] "Generic (PLEG): container finished" podID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" exitCode=0 Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.798813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfd951d7-0a54-44d0-88e4-6a9d22bf2589","Type":"ContainerDied","Data":"b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468"} Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.798861 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bfd951d7-0a54-44d0-88e4-6a9d22bf2589","Type":"ContainerDied","Data":"d8750c350071efc4db443b18603abe4aee83c6268a21ace66fb36851bcbf2861"} Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.798887 4787 scope.go:117] "RemoveContainer" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.798991 4787 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.811929 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.841598 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.854478 4787 scope.go:117] "RemoveContainer" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.856208 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:47 crc kubenswrapper[4787]: E0219 19:43:47.864130 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468\": container with ID starting with b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468 not found: ID does not exist" containerID="b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.864186 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468"} err="failed to get container status \"b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468\": rpc error: code = NotFound desc = could not find container \"b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468\": container with ID starting with b1776b886b1e20d71f5d9374179cdd21a5b87e3fcd8b83b3caa71718879ee468 not found: ID does not exist" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.872490 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:47 crc kubenswrapper[4787]: E0219 19:43:47.873007 4787 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" containerName="nova-scheduler-scheduler" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.873021 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" containerName="nova-scheduler-scheduler" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.873225 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" containerName="nova-scheduler-scheduler" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.874014 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.876269 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 19:43:47 crc kubenswrapper[4787]: W0219 19:43:47.880428 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcae43c8_8b12_49c9_9fc1_a386f0b52631.slice/crio-b8507d500a131e0fa5554c3cb4ea8052af6ba95ee2cf567839fa15ee205b9422 WatchSource:0}: Error finding container b8507d500a131e0fa5554c3cb4ea8052af6ba95ee2cf567839fa15ee205b9422: Status 404 returned error can't find the container with id b8507d500a131e0fa5554c3cb4ea8052af6ba95ee2cf567839fa15ee205b9422 Feb 19 19:43:47 crc kubenswrapper[4787]: I0219 19:43:47.887815 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.038142 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lwr\" (UniqueName: \"kubernetes.io/projected/9d5b581a-832d-4fa1-8a84-29f54d14752b-kube-api-access-q8lwr\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 
19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.038622 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.038830 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-config-data\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.143192 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.143393 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-config-data\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.143527 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lwr\" (UniqueName: \"kubernetes.io/projected/9d5b581a-832d-4fa1-8a84-29f54d14752b-kube-api-access-q8lwr\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.163103 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.175014 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-config-data\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.199267 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lwr\" (UniqueName: \"kubernetes.io/projected/9d5b581a-832d-4fa1-8a84-29f54d14752b-kube-api-access-q8lwr\") pod \"nova-scheduler-0\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.234073 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.307360 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.414383 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.574761 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bd62de-d209-432d-acdc-af4d1d3d7392-operator-scripts\") pod \"54bd62de-d209-432d-acdc-af4d1d3d7392\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.575403 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfrgg\" (UniqueName: \"kubernetes.io/projected/54bd62de-d209-432d-acdc-af4d1d3d7392-kube-api-access-mfrgg\") pod \"54bd62de-d209-432d-acdc-af4d1d3d7392\" (UID: \"54bd62de-d209-432d-acdc-af4d1d3d7392\") " Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.575738 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bd62de-d209-432d-acdc-af4d1d3d7392-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54bd62de-d209-432d-acdc-af4d1d3d7392" (UID: "54bd62de-d209-432d-acdc-af4d1d3d7392"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.576469 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54bd62de-d209-432d-acdc-af4d1d3d7392-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.582320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bd62de-d209-432d-acdc-af4d1d3d7392-kube-api-access-mfrgg" (OuterVolumeSpecName: "kube-api-access-mfrgg") pod "54bd62de-d209-432d-acdc-af4d1d3d7392" (UID: "54bd62de-d209-432d-acdc-af4d1d3d7392"). InnerVolumeSpecName "kube-api-access-mfrgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.585256 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.678275 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3b03a5-b16c-4c89-8196-3d26a077661a-operator-scripts\") pod \"5d3b03a5-b16c-4c89-8196-3d26a077661a\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.678378 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6ws9\" (UniqueName: \"kubernetes.io/projected/5d3b03a5-b16c-4c89-8196-3d26a077661a-kube-api-access-l6ws9\") pod \"5d3b03a5-b16c-4c89-8196-3d26a077661a\" (UID: \"5d3b03a5-b16c-4c89-8196-3d26a077661a\") " Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.679324 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfrgg\" (UniqueName: \"kubernetes.io/projected/54bd62de-d209-432d-acdc-af4d1d3d7392-kube-api-access-mfrgg\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.680163 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d3b03a5-b16c-4c89-8196-3d26a077661a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d3b03a5-b16c-4c89-8196-3d26a077661a" (UID: "5d3b03a5-b16c-4c89-8196-3d26a077661a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.684855 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d3b03a5-b16c-4c89-8196-3d26a077661a-kube-api-access-l6ws9" (OuterVolumeSpecName: "kube-api-access-l6ws9") pod "5d3b03a5-b16c-4c89-8196-3d26a077661a" (UID: "5d3b03a5-b16c-4c89-8196-3d26a077661a"). InnerVolumeSpecName "kube-api-access-l6ws9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.787119 4787 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d3b03a5-b16c-4c89-8196-3d26a077661a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.787148 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6ws9\" (UniqueName: \"kubernetes.io/projected/5d3b03a5-b16c-4c89-8196-3d26a077661a-kube-api-access-l6ws9\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.810270 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7101-account-create-update-hn7v5" event={"ID":"5d3b03a5-b16c-4c89-8196-3d26a077661a","Type":"ContainerDied","Data":"f42d6dca849f8e58c2360d56e5dc5a4f9b38e6d6a5466d65a702997d83d50e3c"} Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.810312 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42d6dca849f8e58c2360d56e5dc5a4f9b38e6d6a5466d65a702997d83d50e3c" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.810325 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7101-account-create-update-hn7v5" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.813428 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vmllq" event={"ID":"54bd62de-d209-432d-acdc-af4d1d3d7392","Type":"ContainerDied","Data":"68a42dda5e9e547f9c6d43e2e106dc76c71f2809b10ad8c6b075c31387bd90fa"} Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.813469 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vmllq" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.813481 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a42dda5e9e547f9c6d43e2e106dc76c71f2809b10ad8c6b075c31387bd90fa" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.822335 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bcae43c8-8b12-49c9-9fc1-a386f0b52631","Type":"ContainerStarted","Data":"db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21"} Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.822377 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bcae43c8-8b12-49c9-9fc1-a386f0b52631","Type":"ContainerStarted","Data":"b8507d500a131e0fa5554c3cb4ea8052af6ba95ee2cf567839fa15ee205b9422"} Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.903886 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd951d7-0a54-44d0-88e4-6a9d22bf2589" path="/var/lib/kubelet/pods/bfd951d7-0a54-44d0-88e4-6a9d22bf2589/volumes" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.904570 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e614e88b-cc22-4e79-9dd5-fe0fe47e4372" path="/var/lib/kubelet/pods/e614e88b-cc22-4e79-9dd5-fe0fe47e4372/volumes" Feb 19 19:43:48 crc kubenswrapper[4787]: I0219 19:43:48.958046 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 19 19:43:49 crc kubenswrapper[4787]: I0219 19:43:49.833192 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d5b581a-832d-4fa1-8a84-29f54d14752b","Type":"ContainerStarted","Data":"38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b"} Feb 19 19:43:49 crc kubenswrapper[4787]: I0219 19:43:49.833651 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d5b581a-832d-4fa1-8a84-29f54d14752b","Type":"ContainerStarted","Data":"ba8264547a0bf0512d8c010301e9d1929891ac5c15e06295597c757c55696243"} Feb 19 19:43:49 crc kubenswrapper[4787]: I0219 19:43:49.835755 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bcae43c8-8b12-49c9-9fc1-a386f0b52631","Type":"ContainerStarted","Data":"f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258"} Feb 19 19:43:49 crc kubenswrapper[4787]: I0219 19:43:49.858767 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8587497539999998 podStartE2EDuration="2.858749754s" podCreationTimestamp="2026-02-19 19:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:49.847545326 +0000 UTC m=+1497.638211268" watchObservedRunningTime="2026-02-19 19:43:49.858749754 +0000 UTC m=+1497.649415696" Feb 19 19:43:49 crc kubenswrapper[4787]: I0219 19:43:49.882881 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.882863389 podStartE2EDuration="3.882863389s" podCreationTimestamp="2026-02-19 19:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:43:49.879931006 +0000 UTC m=+1497.670596948" 
watchObservedRunningTime="2026-02-19 19:43:49.882863389 +0000 UTC m=+1497.673529331" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.025945 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-h476n"] Feb 19 19:43:50 crc kubenswrapper[4787]: E0219 19:43:50.026464 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bd62de-d209-432d-acdc-af4d1d3d7392" containerName="mariadb-database-create" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.026483 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bd62de-d209-432d-acdc-af4d1d3d7392" containerName="mariadb-database-create" Feb 19 19:43:50 crc kubenswrapper[4787]: E0219 19:43:50.026503 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d3b03a5-b16c-4c89-8196-3d26a077661a" containerName="mariadb-account-create-update" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.026510 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d3b03a5-b16c-4c89-8196-3d26a077661a" containerName="mariadb-account-create-update" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.026761 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d3b03a5-b16c-4c89-8196-3d26a077661a" containerName="mariadb-account-create-update" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.026784 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bd62de-d209-432d-acdc-af4d1d3d7392" containerName="mariadb-database-create" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.027571 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.029339 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-f2j2m" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.030140 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.031126 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.037594 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.039891 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-h476n"] Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.117118 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-config-data\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.117192 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fhwc\" (UniqueName: \"kubernetes.io/projected/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-kube-api-access-2fhwc\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.117254 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-scripts\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " 
pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.117433 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-combined-ca-bundle\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.219575 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-config-data\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.220596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fhwc\" (UniqueName: \"kubernetes.io/projected/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-kube-api-access-2fhwc\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.220744 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-scripts\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.221046 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-combined-ca-bundle\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.225544 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-combined-ca-bundle\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.228170 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-scripts\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.230599 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-config-data\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.242478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fhwc\" (UniqueName: \"kubernetes.io/projected/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-kube-api-access-2fhwc\") pod \"aodh-db-sync-h476n\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.362038 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:50 crc kubenswrapper[4787]: I0219 19:43:50.911690 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-h476n"] Feb 19 19:43:51 crc kubenswrapper[4787]: I0219 19:43:51.863721 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h476n" event={"ID":"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4","Type":"ContainerStarted","Data":"31eca58b05c15e5c80ec7446a9bca2b8e144139d265e8d945851e8788410bb45"} Feb 19 19:43:53 crc kubenswrapper[4787]: I0219 19:43:53.307945 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 19:43:54 crc kubenswrapper[4787]: I0219 19:43:54.892639 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:43:54 crc kubenswrapper[4787]: E0219 19:43:54.893443 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:43:55 crc kubenswrapper[4787]: I0219 19:43:55.231974 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:43:55 crc kubenswrapper[4787]: I0219 19:43:55.928600 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h476n" event={"ID":"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4","Type":"ContainerStarted","Data":"ee7b46f20435feaaa0a9eb42494f632259041444c1a75a21e2cfa3cdf95387ac"} Feb 19 19:43:55 crc kubenswrapper[4787]: I0219 19:43:55.955348 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-h476n" 
podStartSLOduration=1.6423453970000002 podStartE2EDuration="5.955330723s" podCreationTimestamp="2026-02-19 19:43:50 +0000 UTC" firstStartedPulling="2026-02-19 19:43:50.916491123 +0000 UTC m=+1498.707157065" lastFinishedPulling="2026-02-19 19:43:55.229476449 +0000 UTC m=+1503.020142391" observedRunningTime="2026-02-19 19:43:55.944879566 +0000 UTC m=+1503.735545508" watchObservedRunningTime="2026-02-19 19:43:55.955330723 +0000 UTC m=+1503.745996665" Feb 19 19:43:57 crc kubenswrapper[4787]: I0219 19:43:57.264669 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 19:43:57 crc kubenswrapper[4787]: I0219 19:43:57.264737 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 19:43:57 crc kubenswrapper[4787]: I0219 19:43:57.958871 4787 generic.go:334] "Generic (PLEG): container finished" podID="c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" containerID="ee7b46f20435feaaa0a9eb42494f632259041444c1a75a21e2cfa3cdf95387ac" exitCode=0 Feb 19 19:43:57 crc kubenswrapper[4787]: I0219 19:43:57.958979 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h476n" event={"ID":"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4","Type":"ContainerDied","Data":"ee7b46f20435feaaa0a9eb42494f632259041444c1a75a21e2cfa3cdf95387ac"} Feb 19 19:43:58 crc kubenswrapper[4787]: I0219 19:43:58.309883 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 19:43:58 crc kubenswrapper[4787]: I0219 19:43:58.344701 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 19:43:58 crc kubenswrapper[4787]: I0219 19:43:58.364015 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 19 19:43:58 crc kubenswrapper[4787]: I0219 19:43:58.364936 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.008601 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.445721 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.562120 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-scripts\") pod \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.562446 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-config-data\") pod \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.562742 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-combined-ca-bundle\") pod \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.562952 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fhwc\" (UniqueName: 
\"kubernetes.io/projected/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-kube-api-access-2fhwc\") pod \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\" (UID: \"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4\") " Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.569267 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-kube-api-access-2fhwc" (OuterVolumeSpecName: "kube-api-access-2fhwc") pod "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" (UID: "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4"). InnerVolumeSpecName "kube-api-access-2fhwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.596846 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-scripts" (OuterVolumeSpecName: "scripts") pod "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" (UID: "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.611760 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" (UID: "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.615753 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-config-data" (OuterVolumeSpecName: "config-data") pod "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" (UID: "c29b6fec-6028-4cc7-b7b8-0bf461e57fe4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.666167 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fhwc\" (UniqueName: \"kubernetes.io/projected/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-kube-api-access-2fhwc\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.666207 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.666217 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.666227 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.982585 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-h476n" Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.982785 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-h476n" event={"ID":"c29b6fec-6028-4cc7-b7b8-0bf461e57fe4","Type":"ContainerDied","Data":"31eca58b05c15e5c80ec7446a9bca2b8e144139d265e8d945851e8788410bb45"} Feb 19 19:43:59 crc kubenswrapper[4787]: I0219 19:43:59.983138 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31eca58b05c15e5c80ec7446a9bca2b8e144139d265e8d945851e8788410bb45" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.128550 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:00 crc kubenswrapper[4787]: E0219 19:44:00.129142 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" containerName="aodh-db-sync" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.129171 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" containerName="aodh-db-sync" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.129435 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" containerName="aodh-db-sync" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.131959 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.134853 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.189280 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-f2j2m" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.189289 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.200785 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.293396 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-scripts\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.293805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-config-data\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.293944 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.294008 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbsn\" (UniqueName: 
\"kubernetes.io/projected/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-kube-api-access-tbbsn\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.395348 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-scripts\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.395476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-config-data\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.395539 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.395556 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbsn\" (UniqueName: \"kubernetes.io/projected/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-kube-api-access-tbbsn\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.402449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-combined-ca-bundle\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.403035 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-scripts\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.421137 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbsn\" (UniqueName: \"kubernetes.io/projected/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-kube-api-access-tbbsn\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.422828 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-config-data\") pod \"aodh-0\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " pod="openstack/aodh-0" Feb 19 19:44:00 crc kubenswrapper[4787]: I0219 19:44:00.504497 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 19:44:01 crc kubenswrapper[4787]: I0219 19:44:01.048592 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:02 crc kubenswrapper[4787]: I0219 19:44:02.003252 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerStarted","Data":"9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183"} Feb 19 19:44:02 crc kubenswrapper[4787]: I0219 19:44:02.004639 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerStarted","Data":"665f493a2d92ef80c75389762293e64741375f6341ac228f2ad1b96504e82a0f"} Feb 19 19:44:03 crc kubenswrapper[4787]: I0219 19:44:03.039298 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:03 crc kubenswrapper[4787]: I0219 19:44:03.039848 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-central-agent" containerID="cri-o://e6d6ca3e9e0217d4d565de753879d5e4c7b7a89464774351c41c7674c4768709" gracePeriod=30 Feb 19 19:44:03 crc kubenswrapper[4787]: I0219 19:44:03.039990 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="proxy-httpd" containerID="cri-o://d46fbb6e3a3e5e6d7d1b6b070caae84b6a06f05f5b4a10d4664b0b799026d205" gracePeriod=30 Feb 19 19:44:03 crc kubenswrapper[4787]: I0219 19:44:03.040024 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="sg-core" containerID="cri-o://ac541ba3bc5338343e46ad89c316565e94104b8d9e3b33b4860537c134bbea43" gracePeriod=30 Feb 19 19:44:03 crc 
kubenswrapper[4787]: I0219 19:44:03.040053 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-notification-agent" containerID="cri-o://04412ad00499dfd222230358fb200963a2232e86f5776bda1874a1010f6b039c" gracePeriod=30 Feb 19 19:44:03 crc kubenswrapper[4787]: I0219 19:44:03.061999 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.239:3000/\": EOF" Feb 19 19:44:03 crc kubenswrapper[4787]: I0219 19:44:03.065936 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.025897 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d018105-8445-48e5-b826-3991f7fa306f" containerID="d46fbb6e3a3e5e6d7d1b6b070caae84b6a06f05f5b4a10d4664b0b799026d205" exitCode=0 Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.026348 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d018105-8445-48e5-b826-3991f7fa306f" containerID="ac541ba3bc5338343e46ad89c316565e94104b8d9e3b33b4860537c134bbea43" exitCode=2 Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.026356 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d018105-8445-48e5-b826-3991f7fa306f" containerID="e6d6ca3e9e0217d4d565de753879d5e4c7b7a89464774351c41c7674c4768709" exitCode=0 Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.025951 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerDied","Data":"d46fbb6e3a3e5e6d7d1b6b070caae84b6a06f05f5b4a10d4664b0b799026d205"} Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.026416 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerDied","Data":"ac541ba3bc5338343e46ad89c316565e94104b8d9e3b33b4860537c134bbea43"} Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.026429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerDied","Data":"e6d6ca3e9e0217d4d565de753879d5e4c7b7a89464774351c41c7674c4768709"} Feb 19 19:44:04 crc kubenswrapper[4787]: I0219 19:44:04.028417 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerStarted","Data":"28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c"} Feb 19 19:44:04 crc kubenswrapper[4787]: E0219 19:44:04.851934 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc783ce0e_0afa_4137_985b_9a1c584070e6.slice/crio-d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc783ce0e_0afa_4137_985b_9a1c584070e6.slice/crio-conmon-d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2.scope\": RecentStats: unable to find data in memory cache]" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.089453 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.090268 4787 generic.go:334] "Generic (PLEG): container finished" podID="165db7d1-424a-47a1-919b-7d5a30145b21" containerID="12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4" exitCode=137 Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.090336 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"165db7d1-424a-47a1-919b-7d5a30145b21","Type":"ContainerDied","Data":"12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4"} Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.090358 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"165db7d1-424a-47a1-919b-7d5a30145b21","Type":"ContainerDied","Data":"cf54265441ba47d3a01d361e15bc9a1af5a3dbc650c63fd1f0fada291a43bcf2"} Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.090374 4787 scope.go:117] "RemoveContainer" containerID="12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.097995 4787 generic.go:334] "Generic (PLEG): container finished" podID="c783ce0e-0afa-4137-985b-9a1c584070e6" containerID="d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2" exitCode=137 Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.098035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c783ce0e-0afa-4137-985b-9a1c584070e6","Type":"ContainerDied","Data":"d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2"} Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.171683 4787 scope.go:117] "RemoveContainer" containerID="48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.213378 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/165db7d1-424a-47a1-919b-7d5a30145b21-logs\") pod \"165db7d1-424a-47a1-919b-7d5a30145b21\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.213438 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-config-data\") pod \"165db7d1-424a-47a1-919b-7d5a30145b21\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.213472 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck85s\" (UniqueName: \"kubernetes.io/projected/165db7d1-424a-47a1-919b-7d5a30145b21-kube-api-access-ck85s\") pod \"165db7d1-424a-47a1-919b-7d5a30145b21\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.213791 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-combined-ca-bundle\") pod \"165db7d1-424a-47a1-919b-7d5a30145b21\" (UID: \"165db7d1-424a-47a1-919b-7d5a30145b21\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.214033 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/165db7d1-424a-47a1-919b-7d5a30145b21-logs" (OuterVolumeSpecName: "logs") pod "165db7d1-424a-47a1-919b-7d5a30145b21" (UID: "165db7d1-424a-47a1-919b-7d5a30145b21"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.214773 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/165db7d1-424a-47a1-919b-7d5a30145b21-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.220598 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165db7d1-424a-47a1-919b-7d5a30145b21-kube-api-access-ck85s" (OuterVolumeSpecName: "kube-api-access-ck85s") pod "165db7d1-424a-47a1-919b-7d5a30145b21" (UID: "165db7d1-424a-47a1-919b-7d5a30145b21"). InnerVolumeSpecName "kube-api-access-ck85s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.269997 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-config-data" (OuterVolumeSpecName: "config-data") pod "165db7d1-424a-47a1-919b-7d5a30145b21" (UID: "165db7d1-424a-47a1-919b-7d5a30145b21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.316796 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "165db7d1-424a-47a1-919b-7d5a30145b21" (UID: "165db7d1-424a-47a1-919b-7d5a30145b21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.320201 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.320660 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/165db7d1-424a-47a1-919b-7d5a30145b21-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.320747 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck85s\" (UniqueName: \"kubernetes.io/projected/165db7d1-424a-47a1-919b-7d5a30145b21-kube-api-access-ck85s\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.349822 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.359083 4787 scope.go:117] "RemoveContainer" containerID="12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4" Feb 19 19:44:05 crc kubenswrapper[4787]: E0219 19:44:05.359742 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4\": container with ID starting with 12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4 not found: ID does not exist" containerID="12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.359858 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4"} err="failed to get container status 
\"12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4\": rpc error: code = NotFound desc = could not find container \"12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4\": container with ID starting with 12de12257351e5daf429ba5984aabab14d3d1e51ac61f1e5a9ba54dbb1b2f9e4 not found: ID does not exist" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.359885 4787 scope.go:117] "RemoveContainer" containerID="48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d" Feb 19 19:44:05 crc kubenswrapper[4787]: E0219 19:44:05.360551 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d\": container with ID starting with 48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d not found: ID does not exist" containerID="48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.360575 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d"} err="failed to get container status \"48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d\": rpc error: code = NotFound desc = could not find container \"48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d\": container with ID starting with 48287e7935ef78fc753a6f04810807e94978dc2adecbca17566b2d5b1c79144d not found: ID does not exist" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.524517 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-299ww\" (UniqueName: \"kubernetes.io/projected/c783ce0e-0afa-4137-985b-9a1c584070e6-kube-api-access-299ww\") pod \"c783ce0e-0afa-4137-985b-9a1c584070e6\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.524580 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-combined-ca-bundle\") pod \"c783ce0e-0afa-4137-985b-9a1c584070e6\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.524713 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-config-data\") pod \"c783ce0e-0afa-4137-985b-9a1c584070e6\" (UID: \"c783ce0e-0afa-4137-985b-9a1c584070e6\") " Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.532670 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c783ce0e-0afa-4137-985b-9a1c584070e6-kube-api-access-299ww" (OuterVolumeSpecName: "kube-api-access-299ww") pod "c783ce0e-0afa-4137-985b-9a1c584070e6" (UID: "c783ce0e-0afa-4137-985b-9a1c584070e6"). InnerVolumeSpecName "kube-api-access-299ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.561395 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c783ce0e-0afa-4137-985b-9a1c584070e6" (UID: "c783ce0e-0afa-4137-985b-9a1c584070e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.608769 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-config-data" (OuterVolumeSpecName: "config-data") pod "c783ce0e-0afa-4137-985b-9a1c584070e6" (UID: "c783ce0e-0afa-4137-985b-9a1c584070e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.627587 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-299ww\" (UniqueName: \"kubernetes.io/projected/c783ce0e-0afa-4137-985b-9a1c584070e6-kube-api-access-299ww\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.627636 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:05 crc kubenswrapper[4787]: I0219 19:44:05.627646 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c783ce0e-0afa-4137-985b-9a1c584070e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.116812 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d018105-8445-48e5-b826-3991f7fa306f" containerID="04412ad00499dfd222230358fb200963a2232e86f5776bda1874a1010f6b039c" exitCode=0 Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.116930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerDied","Data":"04412ad00499dfd222230358fb200963a2232e86f5776bda1874a1010f6b039c"} Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.118383 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.119687 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c783ce0e-0afa-4137-985b-9a1c584070e6","Type":"ContainerDied","Data":"2d7af8e5d6705c82a32ea2817c902c929e0b51294d690db2c8305b076013d460"} Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.119739 4787 scope.go:117] "RemoveContainer" containerID="d9c26715a9dca3c2e4a396db64e432ce75d755a45a0d0d34bfce4cabea06b2a2" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.119755 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.216833 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.255779 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.277796 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.296668 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.311972 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: E0219 19:44:06.312482 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-log" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.312505 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-log" Feb 19 19:44:06 crc kubenswrapper[4787]: E0219 19:44:06.312520 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-metadata" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.312529 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-metadata" Feb 19 19:44:06 crc kubenswrapper[4787]: E0219 19:44:06.312557 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c783ce0e-0afa-4137-985b-9a1c584070e6" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.312563 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c783ce0e-0afa-4137-985b-9a1c584070e6" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.312867 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-metadata" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.312901 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" containerName="nova-metadata-log" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.312916 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c783ce0e-0afa-4137-985b-9a1c584070e6" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.314874 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.317533 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.318106 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.327874 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.346729 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.354468 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.359661 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.361509 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.361975 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.369877 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.425067 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468132 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10cb30cb-3025-4950-a987-ad172341a4ab-logs\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468476 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468506 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9dt\" (UniqueName: \"kubernetes.io/projected/10cb30cb-3025-4950-a987-ad172341a4ab-kube-api-access-fw9dt\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468583 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tglbc\" (UniqueName: \"kubernetes.io/projected/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-kube-api-access-tglbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468602 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468647 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.468803 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-config-data\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " 
pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570306 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvppb\" (UniqueName: \"kubernetes.io/projected/9d018105-8445-48e5-b826-3991f7fa306f-kube-api-access-wvppb\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570341 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-config-data\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570377 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-combined-ca-bundle\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570429 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-log-httpd\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570565 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-scripts\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570598 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-run-httpd\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.570694 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-sg-core-conf-yaml\") pod \"9d018105-8445-48e5-b826-3991f7fa306f\" (UID: \"9d018105-8445-48e5-b826-3991f7fa306f\") " Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571001 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571129 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-config-data\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571239 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10cb30cb-3025-4950-a987-ad172341a4ab-logs\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571271 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571295 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571327 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9dt\" (UniqueName: \"kubernetes.io/projected/10cb30cb-3025-4950-a987-ad172341a4ab-kube-api-access-fw9dt\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571398 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571417 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tglbc\" (UniqueName: \"kubernetes.io/projected/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-kube-api-access-tglbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.571443 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 
19:44:06.571462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.579321 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.579714 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.579918 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.588756 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-scripts" (OuterVolumeSpecName: "scripts") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.589773 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d018105-8445-48e5-b826-3991f7fa306f-kube-api-access-wvppb" (OuterVolumeSpecName: "kube-api-access-wvppb") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "kube-api-access-wvppb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.590219 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10cb30cb-3025-4950-a987-ad172341a4ab-logs\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.612780 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.615241 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tglbc\" (UniqueName: \"kubernetes.io/projected/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-kube-api-access-tglbc\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.671102 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.673682 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-config-data\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.674002 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.674836 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.675555 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9dt\" (UniqueName: \"kubernetes.io/projected/10cb30cb-3025-4950-a987-ad172341a4ab-kube-api-access-fw9dt\") pod \"nova-metadata-0\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " pod="openstack/nova-metadata-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.679190 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvppb\" (UniqueName: \"kubernetes.io/projected/9d018105-8445-48e5-b826-3991f7fa306f-kube-api-access-wvppb\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.679226 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 
19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.679235 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.679250 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d018105-8445-48e5-b826-3991f7fa306f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.681734 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed3a53f5-c0d3-4b73-8af5-b28a227d3859-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ed3a53f5-c0d3-4b73-8af5-b28a227d3859\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.702630 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.723975 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.782583 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.805037 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.888066 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.889806 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-config-data" (OuterVolumeSpecName: "config-data") pod "9d018105-8445-48e5-b826-3991f7fa306f" (UID: "9d018105-8445-48e5-b826-3991f7fa306f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.922439 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="165db7d1-424a-47a1-919b-7d5a30145b21" path="/var/lib/kubelet/pods/165db7d1-424a-47a1-919b-7d5a30145b21/volumes" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.923574 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c783ce0e-0afa-4137-985b-9a1c584070e6" path="/var/lib/kubelet/pods/c783ce0e-0afa-4137-985b-9a1c584070e6/volumes" Feb 19 19:44:06 crc kubenswrapper[4787]: I0219 19:44:06.959872 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.000211 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d018105-8445-48e5-b826-3991f7fa306f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.140350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d018105-8445-48e5-b826-3991f7fa306f","Type":"ContainerDied","Data":"3cba8e8a7177d9c5047287bb556e4cb2db441153f92f7a652fba1bb380b45243"} Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.140682 4787 scope.go:117] "RemoveContainer" containerID="d46fbb6e3a3e5e6d7d1b6b070caae84b6a06f05f5b4a10d4664b0b799026d205" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.140408 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.149203 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerStarted","Data":"65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9"} Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.189409 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.207473 4787 scope.go:117] "RemoveContainer" containerID="ac541ba3bc5338343e46ad89c316565e94104b8d9e3b33b4860537c134bbea43" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.208589 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.228980 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:07 crc kubenswrapper[4787]: E0219 19:44:07.229539 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="sg-core" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.229560 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="sg-core" Feb 19 19:44:07 crc kubenswrapper[4787]: E0219 19:44:07.229585 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-notification-agent" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.229594 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-notification-agent" Feb 19 19:44:07 crc kubenswrapper[4787]: E0219 19:44:07.229605 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="proxy-httpd" Feb 19 19:44:07 crc 
kubenswrapper[4787]: I0219 19:44:07.229636 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="proxy-httpd" Feb 19 19:44:07 crc kubenswrapper[4787]: E0219 19:44:07.229661 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-central-agent" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.229669 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-central-agent" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.229966 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="proxy-httpd" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.229984 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-notification-agent" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.230011 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="sg-core" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.230029 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="ceilometer-central-agent" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.230676 4787 scope.go:117] "RemoveContainer" containerID="04412ad00499dfd222230358fb200963a2232e86f5776bda1874a1010f6b039c" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.232680 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.235518 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.237068 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.240815 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.261641 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.262913 4787 scope.go:117] "RemoveContainer" containerID="e6d6ca3e9e0217d4d565de753879d5e4c7b7a89464774351c41c7674c4768709" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.271153 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.272086 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.276132 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.279961 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.408283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-log-httpd\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.408369 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-run-httpd\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.408397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-scripts\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.408444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.408715 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.409140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf2nh\" (UniqueName: \"kubernetes.io/projected/bdaf941f-a4b8-41dd-ac90-a972358ff321-kube-api-access-bf2nh\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.409555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-config-data\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.434274 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:07 crc kubenswrapper[4787]: W0219 19:44:07.438403 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10cb30cb_3025_4950_a987_ad172341a4ab.slice/crio-2814f2afb83f2679052a87e3a6519d6f616f706017cc8eeb1cd68ea5b32ccf8f WatchSource:0}: Error finding container 2814f2afb83f2679052a87e3a6519d6f616f706017cc8eeb1cd68ea5b32ccf8f: Status 404 returned error can't find the container with id 2814f2afb83f2679052a87e3a6519d6f616f706017cc8eeb1cd68ea5b32ccf8f Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.511911 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf2nh\" (UniqueName: \"kubernetes.io/projected/bdaf941f-a4b8-41dd-ac90-a972358ff321-kube-api-access-bf2nh\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512291 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-config-data\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512366 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-log-httpd\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512409 
4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-run-httpd\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512431 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-scripts\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512465 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512504 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.512981 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-run-httpd\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.513234 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-log-httpd\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " 
pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.518783 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.519003 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-config-data\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.519032 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.519374 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-scripts\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.531368 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf2nh\" (UniqueName: \"kubernetes.io/projected/bdaf941f-a4b8-41dd-ac90-a972358ff321-kube-api-access-bf2nh\") pod \"ceilometer-0\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " pod="openstack/ceilometer-0" Feb 19 19:44:07 crc kubenswrapper[4787]: I0219 19:44:07.581598 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.098985 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:08 crc kubenswrapper[4787]: W0219 19:44:08.110410 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdaf941f_a4b8_41dd_ac90_a972358ff321.slice/crio-528ec7f1b40e9a04bb387552110e955ce4d6c016a6b5a23909aa2aaca9c267d1 WatchSource:0}: Error finding container 528ec7f1b40e9a04bb387552110e955ce4d6c016a6b5a23909aa2aaca9c267d1: Status 404 returned error can't find the container with id 528ec7f1b40e9a04bb387552110e955ce4d6c016a6b5a23909aa2aaca9c267d1 Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.171429 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ed3a53f5-c0d3-4b73-8af5-b28a227d3859","Type":"ContainerStarted","Data":"b7f61a7e623956f0b6e3616f6cc61d352795aff043a8964bac1d5c34e81434d3"} Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.171474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ed3a53f5-c0d3-4b73-8af5-b28a227d3859","Type":"ContainerStarted","Data":"6e8ebba245d4bd1fb8bf54e0b847503a52cd1786cbef9057fb314096e1748ea0"} Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.194550 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.194530687 podStartE2EDuration="2.194530687s" podCreationTimestamp="2026-02-19 19:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:08.187635982 +0000 UTC m=+1515.978301924" watchObservedRunningTime="2026-02-19 19:44:08.194530687 +0000 UTC m=+1515.985196629" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.198256 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10cb30cb-3025-4950-a987-ad172341a4ab","Type":"ContainerStarted","Data":"1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321"} Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.198305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10cb30cb-3025-4950-a987-ad172341a4ab","Type":"ContainerStarted","Data":"2814f2afb83f2679052a87e3a6519d6f616f706017cc8eeb1cd68ea5b32ccf8f"} Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.200528 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerStarted","Data":"528ec7f1b40e9a04bb387552110e955ce4d6c016a6b5a23909aa2aaca9c267d1"} Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.201314 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.205032 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.373774 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf"] Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.376258 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.408316 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf"] Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.544779 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rx9\" (UniqueName: \"kubernetes.io/projected/ca72ab94-eca6-4c68-8571-dfbf22d53215-kube-api-access-d2rx9\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.545103 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.545185 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.545244 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-config\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.545405 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.545459 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.647187 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.647262 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-config\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.647373 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.647405 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.647470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rx9\" (UniqueName: \"kubernetes.io/projected/ca72ab94-eca6-4c68-8571-dfbf22d53215-kube-api-access-d2rx9\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.647494 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.648342 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.649467 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.649496 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.651715 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.652097 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-config\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.669449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rx9\" (UniqueName: \"kubernetes.io/projected/ca72ab94-eca6-4c68-8571-dfbf22d53215-kube-api-access-d2rx9\") pod \"dnsmasq-dns-6b7bbf7cf9-7ndsf\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.712725 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:08 crc kubenswrapper[4787]: I0219 19:44:08.929206 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d018105-8445-48e5-b826-3991f7fa306f" path="/var/lib/kubelet/pods/9d018105-8445-48e5-b826-3991f7fa306f/volumes" Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.218291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerStarted","Data":"1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993"} Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.225192 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerStarted","Data":"060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e"} Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.225336 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-listener" containerID="cri-o://060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e" gracePeriod=30 Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.225309 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-api" containerID="cri-o://9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183" gracePeriod=30 Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.225794 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-evaluator" containerID="cri-o://28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c" gracePeriod=30 Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.225847 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-notifier" containerID="cri-o://65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9" gracePeriod=30 Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.244960 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10cb30cb-3025-4950-a987-ad172341a4ab","Type":"ContainerStarted","Data":"be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67"} Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.276725 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.19555349 podStartE2EDuration="9.276699619s" podCreationTimestamp="2026-02-19 19:44:00 +0000 UTC" firstStartedPulling="2026-02-19 19:44:01.052485239 +0000 UTC m=+1508.843151181" lastFinishedPulling="2026-02-19 19:44:08.133631368 +0000 UTC m=+1515.924297310" observedRunningTime="2026-02-19 19:44:09.252164323 +0000 UTC m=+1517.042830265" watchObservedRunningTime="2026-02-19 19:44:09.276699619 +0000 UTC m=+1517.067365561" Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.326145 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.326123223 podStartE2EDuration="3.326123223s" podCreationTimestamp="2026-02-19 19:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:09.281086754 +0000 UTC m=+1517.071752696" watchObservedRunningTime="2026-02-19 19:44:09.326123223 +0000 UTC m=+1517.116789165" Feb 19 19:44:09 crc kubenswrapper[4787]: I0219 19:44:09.387797 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf"] Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.273298 4787 generic.go:334] "Generic (PLEG): 
container finished" podID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerID="28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c" exitCode=0 Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.273912 4787 generic.go:334] "Generic (PLEG): container finished" podID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerID="9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183" exitCode=0 Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.273390 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerDied","Data":"28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c"} Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.274026 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerDied","Data":"9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183"} Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.290050 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerID="1c9315015aba9bec3414ce09d2c8bb1e20c52429768cef81706a151c8c31ed4e" exitCode=0 Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.291230 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" event={"ID":"ca72ab94-eca6-4c68-8571-dfbf22d53215","Type":"ContainerDied","Data":"1c9315015aba9bec3414ce09d2c8bb1e20c52429768cef81706a151c8c31ed4e"} Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.291259 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" event={"ID":"ca72ab94-eca6-4c68-8571-dfbf22d53215","Type":"ContainerStarted","Data":"b34ebd93ad69e7e4a3f56ef38c546b99471a122fbd0ca0b833078e37e9021ce7"} Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.301859 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerStarted","Data":"bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f"} Feb 19 19:44:10 crc kubenswrapper[4787]: I0219 19:44:10.892301 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:44:10 crc kubenswrapper[4787]: E0219 19:44:10.898643 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.305267 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.314755 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" event={"ID":"ca72ab94-eca6-4c68-8571-dfbf22d53215","Type":"ContainerStarted","Data":"acb394ab72dbe204593957eaceb6ea30b1fdb11914501a13de1dcc9206422ee5"} Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.314886 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.317351 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerStarted","Data":"2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0"} Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.320568 4787 generic.go:334] "Generic (PLEG): container finished" podID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" 
containerID="65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9" exitCode=0 Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.320644 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerDied","Data":"65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9"} Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.320765 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-log" containerID="cri-o://db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21" gracePeriod=30 Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.320810 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-api" containerID="cri-o://f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258" gracePeriod=30 Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.337235 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" podStartSLOduration=3.337212547 podStartE2EDuration="3.337212547s" podCreationTimestamp="2026-02-19 19:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:11.332595806 +0000 UTC m=+1519.123261748" watchObservedRunningTime="2026-02-19 19:44:11.337212547 +0000 UTC m=+1519.127878489" Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.510650 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.702758 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:11 crc kubenswrapper[4787]: 
I0219 19:44:11.961153 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 19:44:11 crc kubenswrapper[4787]: I0219 19:44:11.961218 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 19:44:12 crc kubenswrapper[4787]: I0219 19:44:12.331818 4787 generic.go:334] "Generic (PLEG): container finished" podID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerID="db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21" exitCode=143 Feb 19 19:44:12 crc kubenswrapper[4787]: I0219 19:44:12.331905 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bcae43c8-8b12-49c9-9fc1-a386f0b52631","Type":"ContainerDied","Data":"db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21"} Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.346702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerStarted","Data":"b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25"} Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.347038 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.346933 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-central-agent" containerID="cri-o://1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993" gracePeriod=30 Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.347033 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-notification-agent" 
containerID="cri-o://bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f" gracePeriod=30 Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.346935 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="sg-core" containerID="cri-o://2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0" gracePeriod=30 Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.346960 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="proxy-httpd" containerID="cri-o://b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25" gracePeriod=30 Feb 19 19:44:13 crc kubenswrapper[4787]: I0219 19:44:13.376512 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.107021642 podStartE2EDuration="6.376486501s" podCreationTimestamp="2026-02-19 19:44:07 +0000 UTC" firstStartedPulling="2026-02-19 19:44:08.114785193 +0000 UTC m=+1515.905451135" lastFinishedPulling="2026-02-19 19:44:12.384250052 +0000 UTC m=+1520.174915994" observedRunningTime="2026-02-19 19:44:13.370249194 +0000 UTC m=+1521.160915146" watchObservedRunningTime="2026-02-19 19:44:13.376486501 +0000 UTC m=+1521.167152443" Feb 19 19:44:14 crc kubenswrapper[4787]: I0219 19:44:14.384564 4787 generic.go:334] "Generic (PLEG): container finished" podID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerID="b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25" exitCode=0 Feb 19 19:44:14 crc kubenswrapper[4787]: I0219 19:44:14.384929 4787 generic.go:334] "Generic (PLEG): container finished" podID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerID="2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0" exitCode=2 Feb 19 19:44:14 crc kubenswrapper[4787]: I0219 19:44:14.384945 4787 generic.go:334] "Generic 
(PLEG): container finished" podID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerID="bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f" exitCode=0 Feb 19 19:44:14 crc kubenswrapper[4787]: I0219 19:44:14.384968 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerDied","Data":"b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25"} Feb 19 19:44:14 crc kubenswrapper[4787]: I0219 19:44:14.384999 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerDied","Data":"2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0"} Feb 19 19:44:14 crc kubenswrapper[4787]: I0219 19:44:14.385011 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerDied","Data":"bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f"} Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.078440 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.220710 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcae43c8-8b12-49c9-9fc1-a386f0b52631-logs\") pod \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.220825 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-combined-ca-bundle\") pod \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.220919 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-config-data\") pod \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.221116 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n4c5\" (UniqueName: \"kubernetes.io/projected/bcae43c8-8b12-49c9-9fc1-a386f0b52631-kube-api-access-2n4c5\") pod \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\" (UID: \"bcae43c8-8b12-49c9-9fc1-a386f0b52631\") " Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.221221 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcae43c8-8b12-49c9-9fc1-a386f0b52631-logs" (OuterVolumeSpecName: "logs") pod "bcae43c8-8b12-49c9-9fc1-a386f0b52631" (UID: "bcae43c8-8b12-49c9-9fc1-a386f0b52631"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.222482 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bcae43c8-8b12-49c9-9fc1-a386f0b52631-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.226817 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcae43c8-8b12-49c9-9fc1-a386f0b52631-kube-api-access-2n4c5" (OuterVolumeSpecName: "kube-api-access-2n4c5") pod "bcae43c8-8b12-49c9-9fc1-a386f0b52631" (UID: "bcae43c8-8b12-49c9-9fc1-a386f0b52631"). InnerVolumeSpecName "kube-api-access-2n4c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.271040 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-config-data" (OuterVolumeSpecName: "config-data") pod "bcae43c8-8b12-49c9-9fc1-a386f0b52631" (UID: "bcae43c8-8b12-49c9-9fc1-a386f0b52631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.285922 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcae43c8-8b12-49c9-9fc1-a386f0b52631" (UID: "bcae43c8-8b12-49c9-9fc1-a386f0b52631"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.325286 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.325323 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcae43c8-8b12-49c9-9fc1-a386f0b52631-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.325334 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n4c5\" (UniqueName: \"kubernetes.io/projected/bcae43c8-8b12-49c9-9fc1-a386f0b52631-kube-api-access-2n4c5\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.398711 4787 generic.go:334] "Generic (PLEG): container finished" podID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerID="f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258" exitCode=0 Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.398752 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bcae43c8-8b12-49c9-9fc1-a386f0b52631","Type":"ContainerDied","Data":"f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258"} Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.398777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bcae43c8-8b12-49c9-9fc1-a386f0b52631","Type":"ContainerDied","Data":"b8507d500a131e0fa5554c3cb4ea8052af6ba95ee2cf567839fa15ee205b9422"} Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.398780 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.398794 4787 scope.go:117] "RemoveContainer" containerID="f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.426890 4787 scope.go:117] "RemoveContainer" containerID="db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.445419 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.460041 4787 scope.go:117] "RemoveContainer" containerID="f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258" Feb 19 19:44:15 crc kubenswrapper[4787]: E0219 19:44:15.460770 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258\": container with ID starting with f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258 not found: ID does not exist" containerID="f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.460803 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258"} err="failed to get container status \"f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258\": rpc error: code = NotFound desc = could not find container \"f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258\": container with ID starting with f4c210fad238e2f952183ba29b7c1f3c8c4c14a0d39bac51a2cef9b91f3c3258 not found: ID does not exist" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.460825 4787 scope.go:117] "RemoveContainer" containerID="db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21" Feb 19 19:44:15 crc 
kubenswrapper[4787]: E0219 19:44:15.461298 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21\": container with ID starting with db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21 not found: ID does not exist" containerID="db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.461325 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21"} err="failed to get container status \"db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21\": rpc error: code = NotFound desc = could not find container \"db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21\": container with ID starting with db2bfd46746011839154b08ceafd030f8974c174325ef6f0668d40fbb5b33a21 not found: ID does not exist" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.462026 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.474474 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:15 crc kubenswrapper[4787]: E0219 19:44:15.475182 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-log" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.475202 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-log" Feb 19 19:44:15 crc kubenswrapper[4787]: E0219 19:44:15.475236 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-api" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.475245 4787 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-api" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.475487 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-log" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.475507 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" containerName="nova-api-api" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.476746 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.478726 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.478985 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.479197 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.493403 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.632101 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-config-data\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.632283 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.632379 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-logs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.632421 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsx9\" (UniqueName: \"kubernetes.io/projected/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-kube-api-access-bpsx9\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.632494 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.632578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.734603 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-public-tls-certs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.734702 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-logs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.734728 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpsx9\" (UniqueName: \"kubernetes.io/projected/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-kube-api-access-bpsx9\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.734759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.734778 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.734887 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-config-data\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.735684 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-logs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " 
pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.739419 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.739760 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-public-tls-certs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.740214 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.747540 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-config-data\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.754251 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpsx9\" (UniqueName: \"kubernetes.io/projected/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-kube-api-access-bpsx9\") pod \"nova-api-0\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " pod="openstack/nova-api-0" Feb 19 19:44:15 crc kubenswrapper[4787]: I0219 19:44:15.799852 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:16 crc kubenswrapper[4787]: I0219 19:44:16.293441 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:16 crc kubenswrapper[4787]: W0219 19:44:16.559168 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7e68ec2_4fd9_458d_98b5_c77049a83c5f.slice/crio-f48b61bee96103a1f0b38f319910eb6d0077fb2ee3e5de58d65a5c8f8185c17b WatchSource:0}: Error finding container f48b61bee96103a1f0b38f319910eb6d0077fb2ee3e5de58d65a5c8f8185c17b: Status 404 returned error can't find the container with id f48b61bee96103a1f0b38f319910eb6d0077fb2ee3e5de58d65a5c8f8185c17b Feb 19 19:44:16 crc kubenswrapper[4787]: I0219 19:44:16.703021 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:16 crc kubenswrapper[4787]: I0219 19:44:16.735120 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:16 crc kubenswrapper[4787]: I0219 19:44:16.908562 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcae43c8-8b12-49c9-9fc1-a386f0b52631" path="/var/lib/kubelet/pods/bcae43c8-8b12-49c9-9fc1-a386f0b52631/volumes" Feb 19 19:44:16 crc kubenswrapper[4787]: I0219 19:44:16.961328 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 19:44:16 crc kubenswrapper[4787]: I0219 19:44:16.961375 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.200152 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381196 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-log-httpd\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381305 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-combined-ca-bundle\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381372 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2nh\" (UniqueName: \"kubernetes.io/projected/bdaf941f-a4b8-41dd-ac90-a972358ff321-kube-api-access-bf2nh\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381477 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-run-httpd\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381534 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-sg-core-conf-yaml\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381652 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-config-data\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381692 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-scripts\") pod \"bdaf941f-a4b8-41dd-ac90-a972358ff321\" (UID: \"bdaf941f-a4b8-41dd-ac90-a972358ff321\") " Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381765 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.381858 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.382398 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.382422 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bdaf941f-a4b8-41dd-ac90-a972358ff321-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.388772 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdaf941f-a4b8-41dd-ac90-a972358ff321-kube-api-access-bf2nh" (OuterVolumeSpecName: "kube-api-access-bf2nh") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "kube-api-access-bf2nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.389797 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-scripts" (OuterVolumeSpecName: "scripts") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.431413 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.433149 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7e68ec2-4fd9-458d-98b5-c77049a83c5f","Type":"ContainerStarted","Data":"f48b61bee96103a1f0b38f319910eb6d0077fb2ee3e5de58d65a5c8f8185c17b"} Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.439859 4787 generic.go:334] "Generic (PLEG): container finished" podID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerID="1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993" exitCode=0 Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.440425 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.441184 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerDied","Data":"1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993"} Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.441253 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bdaf941f-a4b8-41dd-ac90-a972358ff321","Type":"ContainerDied","Data":"528ec7f1b40e9a04bb387552110e955ce4d6c016a6b5a23909aa2aaca9c267d1"} Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.441282 4787 scope.go:117] "RemoveContainer" containerID="b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.474340 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.484395 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2nh\" (UniqueName: \"kubernetes.io/projected/bdaf941f-a4b8-41dd-ac90-a972358ff321-kube-api-access-bf2nh\") on node \"crc\" 
DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.484439 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.484453 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.554800 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.586992 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.601402 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-config-data" (OuterVolumeSpecName: "config-data") pod "bdaf941f-a4b8-41dd-ac90-a972358ff321" (UID: "bdaf941f-a4b8-41dd-ac90-a972358ff321"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.613819 4787 scope.go:117] "RemoveContainer" containerID="2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.652740 4787 scope.go:117] "RemoveContainer" containerID="bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.684730 4787 scope.go:117] "RemoveContainer" containerID="1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.689007 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdaf941f-a4b8-41dd-ac90-a972358ff321-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.715988 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sbjcl"] Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.716660 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-notification-agent" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.716682 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-notification-agent" Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.716728 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-central-agent" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.716737 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-central-agent" Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.716762 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="sg-core" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.716774 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="sg-core" Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.716803 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="proxy-httpd" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.716810 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="proxy-httpd" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.717080 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="sg-core" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.717109 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-central-agent" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.717124 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="proxy-httpd" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.717138 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" containerName="ceilometer-notification-agent" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.718205 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.721095 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.721796 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.731245 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sbjcl"] Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.731862 4787 scope.go:117] "RemoveContainer" containerID="b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25" Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.732426 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25\": container with ID starting with b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25 not found: ID does not exist" containerID="b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.732459 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25"} err="failed to get container status \"b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25\": rpc error: code = NotFound desc = could not find container \"b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25\": container with ID starting with b768716aeabd2fcf3351836cddf4dacee75d63e2f7c8cc49d61d94d5f48bbb25 not found: ID does not exist" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.732480 4787 scope.go:117] "RemoveContainer" containerID="2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0" Feb 
19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.733131 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0\": container with ID starting with 2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0 not found: ID does not exist" containerID="2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.733169 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0"} err="failed to get container status \"2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0\": rpc error: code = NotFound desc = could not find container \"2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0\": container with ID starting with 2cb867c5b79e36be25fa1dbf50c2744bcf8971205139f7b4217d3d5640011ed0 not found: ID does not exist" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.733187 4787 scope.go:117] "RemoveContainer" containerID="bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f" Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.735908 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f\": container with ID starting with bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f not found: ID does not exist" containerID="bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.735935 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f"} err="failed to get container status 
\"bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f\": rpc error: code = NotFound desc = could not find container \"bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f\": container with ID starting with bad67c46c73693e0c2b45ccd0159684f6376891ee658b97715abfc58d9f2a71f not found: ID does not exist" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.735951 4787 scope.go:117] "RemoveContainer" containerID="1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993" Feb 19 19:44:17 crc kubenswrapper[4787]: E0219 19:44:17.736209 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993\": container with ID starting with 1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993 not found: ID does not exist" containerID="1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.736234 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993"} err="failed to get container status \"1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993\": rpc error: code = NotFound desc = could not find container \"1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993\": container with ID starting with 1aea03e0f123a82dbb680dd1686bc464d7b8e646262d380af83f9a4cbdd27993 not found: ID does not exist" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.836830 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.862277 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.876629 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.881790 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.900399 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlp55\" (UniqueName: \"kubernetes.io/projected/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-kube-api-access-xlp55\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.900529 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.900765 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.901114 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-config-data\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.901224 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-scripts\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:17 
crc kubenswrapper[4787]: I0219 19:44:17.902410 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 19:44:17 crc kubenswrapper[4787]: I0219 19:44:17.902965 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003253 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-config-data\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003587 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003733 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003866 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-scripts\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003902 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-run-httpd\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003940 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-config-data\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.003985 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmxv\" (UniqueName: \"kubernetes.io/projected/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-kube-api-access-wcmxv\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.004028 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-scripts\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.005279 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlp55\" (UniqueName: \"kubernetes.io/projected/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-kube-api-access-xlp55\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.005332 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-log-httpd\") pod 
\"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.005478 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.009582 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-scripts\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.012312 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-config-data\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.015309 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.027152 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlp55\" (UniqueName: \"kubernetes.io/projected/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-kube-api-access-xlp55\") pod \"nova-cell1-cell-mapping-sbjcl\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " 
pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.047279 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.048836 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.049091 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107292 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107381 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107448 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-scripts\") pod \"ceilometer-0\" (UID: 
\"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107481 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-run-httpd\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107537 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcmxv\" (UniqueName: \"kubernetes.io/projected/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-kube-api-access-wcmxv\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107756 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-log-httpd\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.107811 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-config-data\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.108257 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-run-httpd\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.108485 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-log-httpd\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.111602 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.113376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-scripts\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.114275 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-config-data\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.115409 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.124191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcmxv\" (UniqueName: \"kubernetes.io/projected/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-kube-api-access-wcmxv\") pod \"ceilometer-0\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.260414 4787 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.464273 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7e68ec2-4fd9-458d-98b5-c77049a83c5f","Type":"ContainerStarted","Data":"c8e2b718d9eec8289cee23d7aa4a8e36b33681c5662a93628a54bb6ff883adbe"} Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.464677 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7e68ec2-4fd9-458d-98b5-c77049a83c5f","Type":"ContainerStarted","Data":"63a41a953039c4e18c887aa0fb11d2421e0d8d9354cf360211cef74722cfd23c"} Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.489750 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.489734594 podStartE2EDuration="3.489734594s" podCreationTimestamp="2026-02-19 19:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:18.489284001 +0000 UTC m=+1526.279949943" watchObservedRunningTime="2026-02-19 19:44:18.489734594 +0000 UTC m=+1526.280400536" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.656802 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sbjcl"] Feb 19 19:44:18 crc kubenswrapper[4787]: W0219 19:44:18.664796 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ab8eaee_1764_424d_bbdf_d94f96dc6aa6.slice/crio-d1e9d616d7208b156426484ee859b2ff27cd41b8804495ef900d5249ae82f820 WatchSource:0}: Error finding container d1e9d616d7208b156426484ee859b2ff27cd41b8804495ef900d5249ae82f820: Status 404 returned error can't find the container with id d1e9d616d7208b156426484ee859b2ff27cd41b8804495ef900d5249ae82f820 Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 
19:44:18.715862 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.821661 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7wmdr"] Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.822259 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" containerName="dnsmasq-dns" containerID="cri-o://c47960dc265711777ca13539eee91672fcf10adedcd146de619bc3264880052a" gracePeriod=10 Feb 19 19:44:18 crc kubenswrapper[4787]: I0219 19:44:18.915385 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdaf941f-a4b8-41dd-ac90-a972358ff321" path="/var/lib/kubelet/pods/bdaf941f-a4b8-41dd-ac90-a972358ff321/volumes" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.010325 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.511818 4787 generic.go:334] "Generic (PLEG): container finished" podID="acb65c9d-9b2c-4294-9255-3707b9582008" containerID="c47960dc265711777ca13539eee91672fcf10adedcd146de619bc3264880052a" exitCode=0 Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.512761 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" event={"ID":"acb65c9d-9b2c-4294-9255-3707b9582008","Type":"ContainerDied","Data":"c47960dc265711777ca13539eee91672fcf10adedcd146de619bc3264880052a"} Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.535703 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sbjcl" event={"ID":"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6","Type":"ContainerStarted","Data":"3cb2b2027e7bb4e899aeedcb7aaa2661e6979fdcfe37d211def7337208fe1320"} Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.535800 
4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sbjcl" event={"ID":"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6","Type":"ContainerStarted","Data":"d1e9d616d7208b156426484ee859b2ff27cd41b8804495ef900d5249ae82f820"} Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.547735 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerStarted","Data":"546211aea248f1a04e2c1fe4daeb7b3860b2ecbd30bdcdd9e76d7ffb4970b512"} Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.583043 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sbjcl" podStartSLOduration=2.583012891 podStartE2EDuration="2.583012891s" podCreationTimestamp="2026-02-19 19:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:19.5551488 +0000 UTC m=+1527.345814742" watchObservedRunningTime="2026-02-19 19:44:19.583012891 +0000 UTC m=+1527.373678833" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.704426 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.870986 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-sb\") pod \"acb65c9d-9b2c-4294-9255-3707b9582008\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.871150 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-config\") pod \"acb65c9d-9b2c-4294-9255-3707b9582008\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.871227 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvcn\" (UniqueName: \"kubernetes.io/projected/acb65c9d-9b2c-4294-9255-3707b9582008-kube-api-access-4zvcn\") pod \"acb65c9d-9b2c-4294-9255-3707b9582008\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.871284 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-swift-storage-0\") pod \"acb65c9d-9b2c-4294-9255-3707b9582008\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.871308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-svc\") pod \"acb65c9d-9b2c-4294-9255-3707b9582008\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.871328 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-nb\") pod \"acb65c9d-9b2c-4294-9255-3707b9582008\" (UID: \"acb65c9d-9b2c-4294-9255-3707b9582008\") " Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.889327 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acb65c9d-9b2c-4294-9255-3707b9582008-kube-api-access-4zvcn" (OuterVolumeSpecName: "kube-api-access-4zvcn") pod "acb65c9d-9b2c-4294-9255-3707b9582008" (UID: "acb65c9d-9b2c-4294-9255-3707b9582008"). InnerVolumeSpecName "kube-api-access-4zvcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.947376 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "acb65c9d-9b2c-4294-9255-3707b9582008" (UID: "acb65c9d-9b2c-4294-9255-3707b9582008"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.949432 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-config" (OuterVolumeSpecName: "config") pod "acb65c9d-9b2c-4294-9255-3707b9582008" (UID: "acb65c9d-9b2c-4294-9255-3707b9582008"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.962426 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "acb65c9d-9b2c-4294-9255-3707b9582008" (UID: "acb65c9d-9b2c-4294-9255-3707b9582008"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.975196 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.975243 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvcn\" (UniqueName: \"kubernetes.io/projected/acb65c9d-9b2c-4294-9255-3707b9582008-kube-api-access-4zvcn\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.975276 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.975285 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.988916 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "acb65c9d-9b2c-4294-9255-3707b9582008" (UID: "acb65c9d-9b2c-4294-9255-3707b9582008"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:44:19 crc kubenswrapper[4787]: I0219 19:44:19.989576 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "acb65c9d-9b2c-4294-9255-3707b9582008" (UID: "acb65c9d-9b2c-4294-9255-3707b9582008"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.078131 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.078170 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/acb65c9d-9b2c-4294-9255-3707b9582008-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.558701 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerStarted","Data":"4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9"} Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.561207 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.561202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7wmdr" event={"ID":"acb65c9d-9b2c-4294-9255-3707b9582008","Type":"ContainerDied","Data":"55f3b9d3103cacab24ac351384f048bd1cb346e9d3b100f34f877154605eec39"} Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.561268 4787 scope.go:117] "RemoveContainer" containerID="c47960dc265711777ca13539eee91672fcf10adedcd146de619bc3264880052a" Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.599272 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7wmdr"] Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.611395 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7wmdr"] Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.657510 4787 scope.go:117] "RemoveContainer" 
containerID="bd75f0b4fcf65d5a1e99195daa8467fa687d23fbb7f905de39c506bbe6eb8d3a" Feb 19 19:44:20 crc kubenswrapper[4787]: I0219 19:44:20.907584 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" path="/var/lib/kubelet/pods/acb65c9d-9b2c-4294-9255-3707b9582008/volumes" Feb 19 19:44:21 crc kubenswrapper[4787]: I0219 19:44:21.578823 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerStarted","Data":"17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f"} Feb 19 19:44:21 crc kubenswrapper[4787]: I0219 19:44:21.578873 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerStarted","Data":"b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af"} Feb 19 19:44:24 crc kubenswrapper[4787]: I0219 19:44:24.615960 4787 generic.go:334] "Generic (PLEG): container finished" podID="7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" containerID="3cb2b2027e7bb4e899aeedcb7aaa2661e6979fdcfe37d211def7337208fe1320" exitCode=0 Feb 19 19:44:24 crc kubenswrapper[4787]: I0219 19:44:24.616063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sbjcl" event={"ID":"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6","Type":"ContainerDied","Data":"3cb2b2027e7bb4e899aeedcb7aaa2661e6979fdcfe37d211def7337208fe1320"} Feb 19 19:44:24 crc kubenswrapper[4787]: I0219 19:44:24.622883 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerStarted","Data":"0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a"} Feb 19 19:44:24 crc kubenswrapper[4787]: I0219 19:44:24.623069 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:44:24 crc kubenswrapper[4787]: 
I0219 19:44:24.660987 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.512490929 podStartE2EDuration="7.660969242s" podCreationTimestamp="2026-02-19 19:44:17 +0000 UTC" firstStartedPulling="2026-02-19 19:44:19.03645048 +0000 UTC m=+1526.827116422" lastFinishedPulling="2026-02-19 19:44:24.184928793 +0000 UTC m=+1531.975594735" observedRunningTime="2026-02-19 19:44:24.653047477 +0000 UTC m=+1532.443713419" watchObservedRunningTime="2026-02-19 19:44:24.660969242 +0000 UTC m=+1532.451635184" Feb 19 19:44:24 crc kubenswrapper[4787]: I0219 19:44:24.896324 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:44:24 crc kubenswrapper[4787]: E0219 19:44:24.896593 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:44:25 crc kubenswrapper[4787]: I0219 19:44:25.800463 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 19:44:25 crc kubenswrapper[4787]: I0219 19:44:25.800793 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.149082 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.228625 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-combined-ca-bundle\") pod \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.228814 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-config-data\") pod \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.228857 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlp55\" (UniqueName: \"kubernetes.io/projected/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-kube-api-access-xlp55\") pod \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.228880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-scripts\") pod \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\" (UID: \"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6\") " Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.234480 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-scripts" (OuterVolumeSpecName: "scripts") pod "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" (UID: "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.241823 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-kube-api-access-xlp55" (OuterVolumeSpecName: "kube-api-access-xlp55") pod "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" (UID: "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6"). InnerVolumeSpecName "kube-api-access-xlp55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.268258 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-config-data" (OuterVolumeSpecName: "config-data") pod "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" (UID: "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.273624 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" (UID: "7ab8eaee-1764-424d-bbdf-d94f96dc6aa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.332030 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.332067 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.332079 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlp55\" (UniqueName: \"kubernetes.io/projected/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-kube-api-access-xlp55\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.332091 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.656444 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sbjcl" event={"ID":"7ab8eaee-1764-424d-bbdf-d94f96dc6aa6","Type":"ContainerDied","Data":"d1e9d616d7208b156426484ee859b2ff27cd41b8804495ef900d5249ae82f820"} Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.656481 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1e9d616d7208b156426484ee859b2ff27cd41b8804495ef900d5249ae82f820" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.656533 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sbjcl" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.808044 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.815860 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.871104 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.871513 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9d5b581a-832d-4fa1-8a84-29f54d14752b" containerName="nova-scheduler-scheduler" containerID="cri-o://38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" gracePeriod=30 Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.910398 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.910603 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-log" containerID="cri-o://63a41a953039c4e18c887aa0fb11d2421e0d8d9354cf360211cef74722cfd23c" gracePeriod=30 Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.910714 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-api" containerID="cri-o://c8e2b718d9eec8289cee23d7aa4a8e36b33681c5662a93628a54bb6ff883adbe" gracePeriod=30 Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.911059 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.911201 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-log" containerID="cri-o://1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321" gracePeriod=30 Feb 19 19:44:26 crc kubenswrapper[4787]: I0219 19:44:26.911298 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-metadata" containerID="cri-o://be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67" gracePeriod=30 Feb 19 19:44:27 crc kubenswrapper[4787]: I0219 19:44:27.671325 4787 generic.go:334] "Generic (PLEG): container finished" podID="10cb30cb-3025-4950-a987-ad172341a4ab" containerID="1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321" exitCode=143 Feb 19 19:44:27 crc kubenswrapper[4787]: I0219 19:44:27.671391 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10cb30cb-3025-4950-a987-ad172341a4ab","Type":"ContainerDied","Data":"1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321"} Feb 19 19:44:27 crc kubenswrapper[4787]: I0219 19:44:27.674237 4787 generic.go:334] "Generic (PLEG): container finished" podID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerID="63a41a953039c4e18c887aa0fb11d2421e0d8d9354cf360211cef74722cfd23c" exitCode=143 Feb 19 19:44:27 crc kubenswrapper[4787]: I0219 19:44:27.674279 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e7e68ec2-4fd9-458d-98b5-c77049a83c5f","Type":"ContainerDied","Data":"63a41a953039c4e18c887aa0fb11d2421e0d8d9354cf360211cef74722cfd23c"} Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.308192 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b is running failed: container process not found" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.309002 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b is running failed: container process not found" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.309220 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b is running failed: container process not found" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.309248 4787 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9d5b581a-832d-4fa1-8a84-29f54d14752b" containerName="nova-scheduler-scheduler" Feb 19 19:44:28 
crc kubenswrapper[4787]: I0219 19:44:28.322004 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.394585 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-config-data\") pod \"9d5b581a-832d-4fa1-8a84-29f54d14752b\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.394818 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-combined-ca-bundle\") pod \"9d5b581a-832d-4fa1-8a84-29f54d14752b\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.394856 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8lwr\" (UniqueName: \"kubernetes.io/projected/9d5b581a-832d-4fa1-8a84-29f54d14752b-kube-api-access-q8lwr\") pod \"9d5b581a-832d-4fa1-8a84-29f54d14752b\" (UID: \"9d5b581a-832d-4fa1-8a84-29f54d14752b\") " Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.404939 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d5b581a-832d-4fa1-8a84-29f54d14752b-kube-api-access-q8lwr" (OuterVolumeSpecName: "kube-api-access-q8lwr") pod "9d5b581a-832d-4fa1-8a84-29f54d14752b" (UID: "9d5b581a-832d-4fa1-8a84-29f54d14752b"). InnerVolumeSpecName "kube-api-access-q8lwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.429489 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-config-data" (OuterVolumeSpecName: "config-data") pod "9d5b581a-832d-4fa1-8a84-29f54d14752b" (UID: "9d5b581a-832d-4fa1-8a84-29f54d14752b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.464812 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d5b581a-832d-4fa1-8a84-29f54d14752b" (UID: "9d5b581a-832d-4fa1-8a84-29f54d14752b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.497254 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.497285 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d5b581a-832d-4fa1-8a84-29f54d14752b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.497295 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8lwr\" (UniqueName: \"kubernetes.io/projected/9d5b581a-832d-4fa1-8a84-29f54d14752b-kube-api-access-q8lwr\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.692109 4787 generic.go:334] "Generic (PLEG): container finished" podID="9d5b581a-832d-4fa1-8a84-29f54d14752b" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" 
exitCode=0 Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.692183 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d5b581a-832d-4fa1-8a84-29f54d14752b","Type":"ContainerDied","Data":"38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b"} Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.692216 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9d5b581a-832d-4fa1-8a84-29f54d14752b","Type":"ContainerDied","Data":"ba8264547a0bf0512d8c010301e9d1929891ac5c15e06295597c757c55696243"} Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.692261 4787 scope.go:117] "RemoveContainer" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.692469 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.735927 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.738867 4787 scope.go:117] "RemoveContainer" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.739919 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b\": container with ID starting with 38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b not found: ID does not exist" containerID="38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.740058 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b"} 
err="failed to get container status \"38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b\": rpc error: code = NotFound desc = could not find container \"38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b\": container with ID starting with 38145d8128b71a3e07fc4258df68c945c04af3241ed67de129715a762206323b not found: ID does not exist" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.756409 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.783892 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.784418 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" containerName="dnsmasq-dns" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784436 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" containerName="dnsmasq-dns" Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.784452 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d5b581a-832d-4fa1-8a84-29f54d14752b" containerName="nova-scheduler-scheduler" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784458 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d5b581a-832d-4fa1-8a84-29f54d14752b" containerName="nova-scheduler-scheduler" Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.784499 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" containerName="init" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784505 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" containerName="init" Feb 19 19:44:28 crc kubenswrapper[4787]: E0219 19:44:28.784518 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" containerName="nova-manage" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784526 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" containerName="nova-manage" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784762 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb65c9d-9b2c-4294-9255-3707b9582008" containerName="dnsmasq-dns" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784774 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d5b581a-832d-4fa1-8a84-29f54d14752b" containerName="nova-scheduler-scheduler" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.784794 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" containerName="nova-manage" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.785670 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.788144 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.811014 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.904540 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d5b581a-832d-4fa1-8a84-29f54d14752b" path="/var/lib/kubelet/pods/9d5b581a-832d-4fa1-8a84-29f54d14752b/volumes" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.907562 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32df70fc-d0d1-42ed-b37f-3ba71192a187-config-data\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 
19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.907678 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn258\" (UniqueName: \"kubernetes.io/projected/32df70fc-d0d1-42ed-b37f-3ba71192a187-kube-api-access-tn258\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:28 crc kubenswrapper[4787]: I0219 19:44:28.907718 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32df70fc-d0d1-42ed-b37f-3ba71192a187-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.009768 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32df70fc-d0d1-42ed-b37f-3ba71192a187-config-data\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.009846 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn258\" (UniqueName: \"kubernetes.io/projected/32df70fc-d0d1-42ed-b37f-3ba71192a187-kube-api-access-tn258\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.009922 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32df70fc-d0d1-42ed-b37f-3ba71192a187-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.014475 4787 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32df70fc-d0d1-42ed-b37f-3ba71192a187-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.022425 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32df70fc-d0d1-42ed-b37f-3ba71192a187-config-data\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.025986 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn258\" (UniqueName: \"kubernetes.io/projected/32df70fc-d0d1-42ed-b37f-3ba71192a187-kube-api-access-tn258\") pod \"nova-scheduler-0\" (UID: \"32df70fc-d0d1-42ed-b37f-3ba71192a187\") " pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.120952 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.622547 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 19:44:29 crc kubenswrapper[4787]: W0219 19:44:29.624101 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32df70fc_d0d1_42ed_b37f_3ba71192a187.slice/crio-fa8536dfabcab016885419a307fd50f271fbf1a7b417fc13282e51d67186d81c WatchSource:0}: Error finding container fa8536dfabcab016885419a307fd50f271fbf1a7b417fc13282e51d67186d81c: Status 404 returned error can't find the container with id fa8536dfabcab016885419a307fd50f271fbf1a7b417fc13282e51d67186d81c Feb 19 19:44:29 crc kubenswrapper[4787]: I0219 19:44:29.719677 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"32df70fc-d0d1-42ed-b37f-3ba71192a187","Type":"ContainerStarted","Data":"fa8536dfabcab016885419a307fd50f271fbf1a7b417fc13282e51d67186d81c"} Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.654825 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.732127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"32df70fc-d0d1-42ed-b37f-3ba71192a187","Type":"ContainerStarted","Data":"5961274b754278bac3a57d0d7eb1ba89f427de44d705af7eee6d84e89e1f8c47"} Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.735637 4787 generic.go:334] "Generic (PLEG): container finished" podID="10cb30cb-3025-4950-a987-ad172341a4ab" containerID="be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67" exitCode=0 Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.735682 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10cb30cb-3025-4950-a987-ad172341a4ab","Type":"ContainerDied","Data":"be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67"} Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.735714 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10cb30cb-3025-4950-a987-ad172341a4ab","Type":"ContainerDied","Data":"2814f2afb83f2679052a87e3a6519d6f616f706017cc8eeb1cd68ea5b32ccf8f"} Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.735735 4787 scope.go:117] "RemoveContainer" containerID="be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.735874 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.752995 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10cb30cb-3025-4950-a987-ad172341a4ab-logs\") pod \"10cb30cb-3025-4950-a987-ad172341a4ab\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.753405 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-combined-ca-bundle\") pod \"10cb30cb-3025-4950-a987-ad172341a4ab\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.753505 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-config-data\") pod \"10cb30cb-3025-4950-a987-ad172341a4ab\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.753571 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-nova-metadata-tls-certs\") pod \"10cb30cb-3025-4950-a987-ad172341a4ab\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.753654 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10cb30cb-3025-4950-a987-ad172341a4ab-logs" (OuterVolumeSpecName: "logs") pod "10cb30cb-3025-4950-a987-ad172341a4ab" (UID: "10cb30cb-3025-4950-a987-ad172341a4ab"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.753904 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9dt\" (UniqueName: \"kubernetes.io/projected/10cb30cb-3025-4950-a987-ad172341a4ab-kube-api-access-fw9dt\") pod \"10cb30cb-3025-4950-a987-ad172341a4ab\" (UID: \"10cb30cb-3025-4950-a987-ad172341a4ab\") " Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.754661 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10cb30cb-3025-4950-a987-ad172341a4ab-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.756376 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7563601760000003 podStartE2EDuration="2.756360176s" podCreationTimestamp="2026-02-19 19:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:30.747479904 +0000 UTC m=+1538.538145856" watchObservedRunningTime="2026-02-19 19:44:30.756360176 +0000 UTC m=+1538.547026108" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.759555 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10cb30cb-3025-4950-a987-ad172341a4ab-kube-api-access-fw9dt" (OuterVolumeSpecName: "kube-api-access-fw9dt") pod "10cb30cb-3025-4950-a987-ad172341a4ab" (UID: "10cb30cb-3025-4950-a987-ad172341a4ab"). InnerVolumeSpecName "kube-api-access-fw9dt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.769039 4787 scope.go:117] "RemoveContainer" containerID="1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.786028 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-config-data" (OuterVolumeSpecName: "config-data") pod "10cb30cb-3025-4950-a987-ad172341a4ab" (UID: "10cb30cb-3025-4950-a987-ad172341a4ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.786900 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10cb30cb-3025-4950-a987-ad172341a4ab" (UID: "10cb30cb-3025-4950-a987-ad172341a4ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.812931 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "10cb30cb-3025-4950-a987-ad172341a4ab" (UID: "10cb30cb-3025-4950-a987-ad172341a4ab"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.857011 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.857052 4787 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.857066 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9dt\" (UniqueName: \"kubernetes.io/projected/10cb30cb-3025-4950-a987-ad172341a4ab-kube-api-access-fw9dt\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.857078 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb30cb-3025-4950-a987-ad172341a4ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.893246 4787 scope.go:117] "RemoveContainer" containerID="be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67" Feb 19 19:44:30 crc kubenswrapper[4787]: E0219 19:44:30.893701 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67\": container with ID starting with be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67 not found: ID does not exist" containerID="be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.893757 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67"} err="failed to get container status \"be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67\": rpc error: code = NotFound desc = could not find container \"be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67\": container with ID starting with be0f401000071ff649af3582288205113823cc9110ed353030c61aa654e86e67 not found: ID does not exist" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.893787 4787 scope.go:117] "RemoveContainer" containerID="1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321" Feb 19 19:44:30 crc kubenswrapper[4787]: E0219 19:44:30.894093 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321\": container with ID starting with 1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321 not found: ID does not exist" containerID="1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321" Feb 19 19:44:30 crc kubenswrapper[4787]: I0219 19:44:30.894198 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321"} err="failed to get container status \"1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321\": rpc error: code = NotFound desc = could not find container \"1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321\": container with ID starting with 1bb4f4e7efdf673b81c861e7a936f792fcf0871b1550fcfc29fe9c0db15b9321 not found: ID does not exist" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.084187 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.105124 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-metadata-0"] Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.116644 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:31 crc kubenswrapper[4787]: E0219 19:44:31.117318 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-metadata" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.117421 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-metadata" Feb 19 19:44:31 crc kubenswrapper[4787]: E0219 19:44:31.117523 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-log" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.117574 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-log" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.118112 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-metadata" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.118237 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" containerName="nova-metadata-log" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.119949 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.124703 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.125044 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.131672 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.266072 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5f8\" (UniqueName: \"kubernetes.io/projected/7a497801-b864-4180-bcc6-d6b7f3c5b35e-kube-api-access-qz5f8\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.266176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-config-data\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.266209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.266305 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.266339 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a497801-b864-4180-bcc6-d6b7f3c5b35e-logs\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.368467 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-config-data\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.368517 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.368657 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.368697 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a497801-b864-4180-bcc6-d6b7f3c5b35e-logs\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 
19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.368776 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz5f8\" (UniqueName: \"kubernetes.io/projected/7a497801-b864-4180-bcc6-d6b7f3c5b35e-kube-api-access-qz5f8\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.369582 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a497801-b864-4180-bcc6-d6b7f3c5b35e-logs\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.372313 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.374759 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.381079 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a497801-b864-4180-bcc6-d6b7f3c5b35e-config-data\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.386448 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz5f8\" (UniqueName: 
\"kubernetes.io/projected/7a497801-b864-4180-bcc6-d6b7f3c5b35e-kube-api-access-qz5f8\") pod \"nova-metadata-0\" (UID: \"7a497801-b864-4180-bcc6-d6b7f3c5b35e\") " pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.446337 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 19:44:31 crc kubenswrapper[4787]: W0219 19:44:31.904353 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a497801_b864_4180_bcc6_d6b7f3c5b35e.slice/crio-f6605520a7af89e333232f29dd8891c835c2afca6fec14cbaedd39045732d4f5 WatchSource:0}: Error finding container f6605520a7af89e333232f29dd8891c835c2afca6fec14cbaedd39045732d4f5: Status 404 returned error can't find the container with id f6605520a7af89e333232f29dd8891c835c2afca6fec14cbaedd39045732d4f5 Feb 19 19:44:31 crc kubenswrapper[4787]: I0219 19:44:31.905380 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.761285 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a497801-b864-4180-bcc6-d6b7f3c5b35e","Type":"ContainerStarted","Data":"d5835926644ff94744d24da0233985cd9bc8125c1f981032d461726bed48685c"} Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.761717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a497801-b864-4180-bcc6-d6b7f3c5b35e","Type":"ContainerStarted","Data":"566a49a55bad2558571b28cd76f35be479c9b3c44bc1b6e37863ba129f4fcd95"} Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.761730 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a497801-b864-4180-bcc6-d6b7f3c5b35e","Type":"ContainerStarted","Data":"f6605520a7af89e333232f29dd8891c835c2afca6fec14cbaedd39045732d4f5"} Feb 19 19:44:32 crc 
kubenswrapper[4787]: I0219 19:44:32.764420 4787 generic.go:334] "Generic (PLEG): container finished" podID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerID="c8e2b718d9eec8289cee23d7aa4a8e36b33681c5662a93628a54bb6ff883adbe" exitCode=0 Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.764466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7e68ec2-4fd9-458d-98b5-c77049a83c5f","Type":"ContainerDied","Data":"c8e2b718d9eec8289cee23d7aa4a8e36b33681c5662a93628a54bb6ff883adbe"} Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.764495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7e68ec2-4fd9-458d-98b5-c77049a83c5f","Type":"ContainerDied","Data":"f48b61bee96103a1f0b38f319910eb6d0077fb2ee3e5de58d65a5c8f8185c17b"} Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.764567 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48b61bee96103a1f0b38f319910eb6d0077fb2ee3e5de58d65a5c8f8185c17b" Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.795356 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.795338782 podStartE2EDuration="1.795338782s" podCreationTimestamp="2026-02-19 19:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:32.784508955 +0000 UTC m=+1540.575174897" watchObservedRunningTime="2026-02-19 19:44:32.795338782 +0000 UTC m=+1540.586004724" Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.855144 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:32 crc kubenswrapper[4787]: I0219 19:44:32.919101 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10cb30cb-3025-4950-a987-ad172341a4ab" path="/var/lib/kubelet/pods/10cb30cb-3025-4950-a987-ad172341a4ab/volumes" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007173 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-config-data\") pod \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007235 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-combined-ca-bundle\") pod \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007359 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-public-tls-certs\") pod \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007460 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-logs\") pod \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007592 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-internal-tls-certs\") pod 
\"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007721 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpsx9\" (UniqueName: \"kubernetes.io/projected/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-kube-api-access-bpsx9\") pod \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\" (UID: \"e7e68ec2-4fd9-458d-98b5-c77049a83c5f\") " Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.007956 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-logs" (OuterVolumeSpecName: "logs") pod "e7e68ec2-4fd9-458d-98b5-c77049a83c5f" (UID: "e7e68ec2-4fd9-458d-98b5-c77049a83c5f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.008656 4787 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-logs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.012381 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-kube-api-access-bpsx9" (OuterVolumeSpecName: "kube-api-access-bpsx9") pod "e7e68ec2-4fd9-458d-98b5-c77049a83c5f" (UID: "e7e68ec2-4fd9-458d-98b5-c77049a83c5f"). InnerVolumeSpecName "kube-api-access-bpsx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.046413 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e68ec2-4fd9-458d-98b5-c77049a83c5f" (UID: "e7e68ec2-4fd9-458d-98b5-c77049a83c5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.061335 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-config-data" (OuterVolumeSpecName: "config-data") pod "e7e68ec2-4fd9-458d-98b5-c77049a83c5f" (UID: "e7e68ec2-4fd9-458d-98b5-c77049a83c5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.075849 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e7e68ec2-4fd9-458d-98b5-c77049a83c5f" (UID: "e7e68ec2-4fd9-458d-98b5-c77049a83c5f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.076744 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e7e68ec2-4fd9-458d-98b5-c77049a83c5f" (UID: "e7e68ec2-4fd9-458d-98b5-c77049a83c5f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.110899 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.111170 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.111181 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpsx9\" (UniqueName: \"kubernetes.io/projected/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-kube-api-access-bpsx9\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.111193 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.111201 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e68ec2-4fd9-458d-98b5-c77049a83c5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.772398 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.824123 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.840150 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.857298 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:33 crc kubenswrapper[4787]: E0219 19:44:33.858690 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-api" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.858905 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-api" Feb 19 19:44:33 crc kubenswrapper[4787]: E0219 19:44:33.858933 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-log" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.859049 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-log" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.859382 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-api" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.859416 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" containerName="nova-api-log" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.861114 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.863957 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.865043 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.865086 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.872489 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.936715 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.936786 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-config-data\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.936946 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9l2g\" (UniqueName: \"kubernetes.io/projected/a898d73d-04ec-4e21-bd5a-99e453f36d8e-kube-api-access-c9l2g\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.937035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.937057 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a898d73d-04ec-4e21-bd5a-99e453f36d8e-logs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:33 crc kubenswrapper[4787]: I0219 19:44:33.937073 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.038665 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.038717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a898d73d-04ec-4e21-bd5a-99e453f36d8e-logs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.038739 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 
crc kubenswrapper[4787]: I0219 19:44:34.038801 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.038838 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-config-data\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.038939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9l2g\" (UniqueName: \"kubernetes.io/projected/a898d73d-04ec-4e21-bd5a-99e453f36d8e-kube-api-access-c9l2g\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.039592 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a898d73d-04ec-4e21-bd5a-99e453f36d8e-logs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.044071 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.044399 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-config-data\") pod \"nova-api-0\" (UID: 
\"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.044525 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.045315 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a898d73d-04ec-4e21-bd5a-99e453f36d8e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.055704 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9l2g\" (UniqueName: \"kubernetes.io/projected/a898d73d-04ec-4e21-bd5a-99e453f36d8e-kube-api-access-c9l2g\") pod \"nova-api-0\" (UID: \"a898d73d-04ec-4e21-bd5a-99e453f36d8e\") " pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.121961 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.181813 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.659581 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 19:44:34 crc kubenswrapper[4787]: W0219 19:44:34.669854 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda898d73d_04ec_4e21_bd5a_99e453f36d8e.slice/crio-2d216d8e13f7630ef4d58774b46311f706a10fdc10ac291b70dcaf299c6872b9 WatchSource:0}: Error finding container 2d216d8e13f7630ef4d58774b46311f706a10fdc10ac291b70dcaf299c6872b9: Status 404 returned error can't find the container with id 2d216d8e13f7630ef4d58774b46311f706a10fdc10ac291b70dcaf299c6872b9 Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.787456 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a898d73d-04ec-4e21-bd5a-99e453f36d8e","Type":"ContainerStarted","Data":"2d216d8e13f7630ef4d58774b46311f706a10fdc10ac291b70dcaf299c6872b9"} Feb 19 19:44:34 crc kubenswrapper[4787]: I0219 19:44:34.906859 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e68ec2-4fd9-458d-98b5-c77049a83c5f" path="/var/lib/kubelet/pods/e7e68ec2-4fd9-458d-98b5-c77049a83c5f/volumes" Feb 19 19:44:35 crc kubenswrapper[4787]: I0219 19:44:35.801584 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a898d73d-04ec-4e21-bd5a-99e453f36d8e","Type":"ContainerStarted","Data":"a85c7abbbc470c546cda33dec3a21bb185b28d744dbb4dda7182617fba7f3bf5"} Feb 19 19:44:35 crc kubenswrapper[4787]: I0219 19:44:35.801916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a898d73d-04ec-4e21-bd5a-99e453f36d8e","Type":"ContainerStarted","Data":"6414ee258b8e693b99c6600e5f78ebbeba6ad7abb36c8c497db2383e54830b9e"} Feb 19 19:44:35 crc kubenswrapper[4787]: I0219 19:44:35.830109 4787 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.830083097 podStartE2EDuration="2.830083097s" podCreationTimestamp="2026-02-19 19:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:44:35.824946411 +0000 UTC m=+1543.615612343" watchObservedRunningTime="2026-02-19 19:44:35.830083097 +0000 UTC m=+1543.620749089" Feb 19 19:44:36 crc kubenswrapper[4787]: I0219 19:44:36.187257 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9d018105-8445-48e5-b826-3991f7fa306f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.239:3000/\": dial tcp 10.217.0.239:3000: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:36 crc kubenswrapper[4787]: I0219 19:44:36.447689 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 19:44:36 crc kubenswrapper[4787]: I0219 19:44:36.447736 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.121206 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.157781 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.800822 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.853493 4787 generic.go:334] "Generic (PLEG): container finished" podID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerID="060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e" exitCode=137 Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.853630 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerDied","Data":"060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e"} Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.853684 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933","Type":"ContainerDied","Data":"665f493a2d92ef80c75389762293e64741375f6341ac228f2ad1b96504e82a0f"} Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.853705 4787 scope.go:117] "RemoveContainer" containerID="060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.853598 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.874994 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-combined-ca-bundle\") pod \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.875168 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-config-data\") pod \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.875295 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-scripts\") pod \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.875350 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbbsn\" (UniqueName: \"kubernetes.io/projected/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-kube-api-access-tbbsn\") pod \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\" (UID: \"5e2293f0-b4d0-414b-9c8f-7f9a75ac6933\") " Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.881718 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-scripts" (OuterVolumeSpecName: "scripts") pod "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" (UID: "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.892806 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:44:39 crc kubenswrapper[4787]: E0219 19:44:39.895126 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.896763 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-kube-api-access-tbbsn" (OuterVolumeSpecName: "kube-api-access-tbbsn") pod "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" (UID: "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933"). InnerVolumeSpecName "kube-api-access-tbbsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.899220 4787 scope.go:117] "RemoveContainer" containerID="65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.899394 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.978877 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:39 crc kubenswrapper[4787]: I0219 19:44:39.978909 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbbsn\" (UniqueName: \"kubernetes.io/projected/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-kube-api-access-tbbsn\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.031396 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" (UID: "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.034196 4787 scope.go:117] "RemoveContainer" containerID="28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.036213 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-config-data" (OuterVolumeSpecName: "config-data") pod "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" (UID: "5e2293f0-b4d0-414b-9c8f-7f9a75ac6933"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.067926 4787 scope.go:117] "RemoveContainer" containerID="9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.081331 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.081366 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.092121 4787 scope.go:117] "RemoveContainer" containerID="060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.093471 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e\": container with ID starting with 060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e not found: ID does not exist" containerID="060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.093522 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e"} err="failed to get container status \"060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e\": rpc error: code = NotFound desc = could not find container \"060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e\": container with ID starting with 060bf65f5bfc53ce8c32f1f4e42bca496b8034332961cc4d949821d942b91e5e not found: ID does not 
exist" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.093558 4787 scope.go:117] "RemoveContainer" containerID="65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.093996 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9\": container with ID starting with 65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9 not found: ID does not exist" containerID="65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.094045 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9"} err="failed to get container status \"65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9\": rpc error: code = NotFound desc = could not find container \"65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9\": container with ID starting with 65d580798a9100d4fe7ef6b868aca2a13455339fcfc0f66b20c85360361202a9 not found: ID does not exist" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.094069 4787 scope.go:117] "RemoveContainer" containerID="28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.094392 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c\": container with ID starting with 28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c not found: ID does not exist" containerID="28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.094460 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c"} err="failed to get container status \"28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c\": rpc error: code = NotFound desc = could not find container \"28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c\": container with ID starting with 28f4a27b64b1c6c4a200b9960517347614714993c6305bfb7c34cb52ad78ac5c not found: ID does not exist" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.094531 4787 scope.go:117] "RemoveContainer" containerID="9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.095004 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183\": container with ID starting with 9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183 not found: ID does not exist" containerID="9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.095031 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183"} err="failed to get container status \"9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183\": rpc error: code = NotFound desc = could not find container \"9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183\": container with ID starting with 9ef83f189b28e37142155f8f3453927a9cb456ec6447b8f18f32f770bd9a1183 not found: ID does not exist" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.194903 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.213310 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/aodh-0"] Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.246428 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.260700 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-evaluator" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.260746 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-evaluator" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.260818 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-notifier" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.260827 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-notifier" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.260848 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-listener" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.260856 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-listener" Feb 19 19:44:40 crc kubenswrapper[4787]: E0219 19:44:40.260900 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-api" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.260908 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-api" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.261694 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-evaluator" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.261724 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-listener" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.261743 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-api" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.261760 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" containerName="aodh-notifier" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.288927 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.289152 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.293879 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-f2j2m" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.293983 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.294193 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.294208 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.294343 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.389475 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2jc\" (UniqueName: \"kubernetes.io/projected/70954191-d761-4466-8f3d-2e60d61d19de-kube-api-access-7d2jc\") pod \"aodh-0\" (UID: 
\"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.389894 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-scripts\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.389962 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.390176 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-internal-tls-certs\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.390799 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-config-data\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.390882 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-public-tls-certs\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.493056 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.493216 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-internal-tls-certs\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.493249 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-config-data\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.493274 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-public-tls-certs\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.493377 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2jc\" (UniqueName: \"kubernetes.io/projected/70954191-d761-4466-8f3d-2e60d61d19de-kube-api-access-7d2jc\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.493405 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-scripts\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 
19:44:40.498439 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-combined-ca-bundle\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.499052 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-internal-tls-certs\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.499628 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-config-data\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.500795 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-public-tls-certs\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.513006 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-scripts\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.513316 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2jc\" (UniqueName: \"kubernetes.io/projected/70954191-d761-4466-8f3d-2e60d61d19de-kube-api-access-7d2jc\") pod \"aodh-0\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " pod="openstack/aodh-0" Feb 19 19:44:40 crc 
kubenswrapper[4787]: I0219 19:44:40.669655 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 19:44:40 crc kubenswrapper[4787]: I0219 19:44:40.915302 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2293f0-b4d0-414b-9c8f-7f9a75ac6933" path="/var/lib/kubelet/pods/5e2293f0-b4d0-414b-9c8f-7f9a75ac6933/volumes" Feb 19 19:44:41 crc kubenswrapper[4787]: W0219 19:44:41.171677 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70954191_d761_4466_8f3d_2e60d61d19de.slice/crio-ff67b771de5b592e02f414bd1bd28048bd1728567609e40a21a74b2bb919808d WatchSource:0}: Error finding container ff67b771de5b592e02f414bd1bd28048bd1728567609e40a21a74b2bb919808d: Status 404 returned error can't find the container with id ff67b771de5b592e02f414bd1bd28048bd1728567609e40a21a74b2bb919808d Feb 19 19:44:41 crc kubenswrapper[4787]: I0219 19:44:41.176188 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 19:44:41 crc kubenswrapper[4787]: I0219 19:44:41.466198 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 19:44:41 crc kubenswrapper[4787]: I0219 19:44:41.466690 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 19:44:41 crc kubenswrapper[4787]: I0219 19:44:41.882077 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerStarted","Data":"ff67b771de5b592e02f414bd1bd28048bd1728567609e40a21a74b2bb919808d"} Feb 19 19:44:42 crc kubenswrapper[4787]: I0219 19:44:42.504964 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7a497801-b864-4180-bcc6-d6b7f3c5b35e" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.255:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:42 crc kubenswrapper[4787]: I0219 19:44:42.504992 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7a497801-b864-4180-bcc6-d6b7f3c5b35e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.255:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:42 crc kubenswrapper[4787]: I0219 19:44:42.911793 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerStarted","Data":"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85"} Feb 19 19:44:43 crc kubenswrapper[4787]: I0219 19:44:43.919474 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerStarted","Data":"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769"} Feb 19 19:44:43 crc kubenswrapper[4787]: I0219 19:44:43.920046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerStarted","Data":"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639"} Feb 19 19:44:44 crc kubenswrapper[4787]: I0219 19:44:44.182364 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 19:44:44 crc kubenswrapper[4787]: I0219 19:44:44.182730 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 19:44:44 crc kubenswrapper[4787]: I0219 19:44:44.932917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerStarted","Data":"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49"} Feb 19 
19:44:44 crc kubenswrapper[4787]: I0219 19:44:44.972287 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.09163262 podStartE2EDuration="4.972262068s" podCreationTimestamp="2026-02-19 19:44:40 +0000 UTC" firstStartedPulling="2026-02-19 19:44:41.174150454 +0000 UTC m=+1548.964816396" lastFinishedPulling="2026-02-19 19:44:44.054779902 +0000 UTC m=+1551.845445844" observedRunningTime="2026-02-19 19:44:44.96952516 +0000 UTC m=+1552.760191102" watchObservedRunningTime="2026-02-19 19:44:44.972262068 +0000 UTC m=+1552.762928020" Feb 19 19:44:45 crc kubenswrapper[4787]: I0219 19:44:45.204065 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a898d73d-04ec-4e21-bd5a-99e453f36d8e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:45 crc kubenswrapper[4787]: I0219 19:44:45.204181 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a898d73d-04ec-4e21-bd5a-99e453f36d8e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 19:44:48 crc kubenswrapper[4787]: I0219 19:44:48.271270 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 19:44:51 crc kubenswrapper[4787]: I0219 19:44:51.453245 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 19:44:51 crc kubenswrapper[4787]: I0219 19:44:51.454247 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 19:44:51 crc kubenswrapper[4787]: I0219 19:44:51.464532 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 
19:44:52 crc kubenswrapper[4787]: I0219 19:44:52.018931 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 19:44:52 crc kubenswrapper[4787]: I0219 19:44:52.616448 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:44:52 crc kubenswrapper[4787]: I0219 19:44:52.617014 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" containerName="kube-state-metrics" containerID="cri-o://85354daa3da10b50cc1e08a7f157ecf02d9d1ea422d613d618253cf52e8f86b0" gracePeriod=30 Feb 19 19:44:52 crc kubenswrapper[4787]: E0219 19:44:52.762701 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15222b2_deb2_46d1_a58d_d58d78228940.slice/crio-85354daa3da10b50cc1e08a7f157ecf02d9d1ea422d613d618253cf52e8f86b0.scope\": RecentStats: unable to find data in memory cache]" Feb 19 19:44:52 crc kubenswrapper[4787]: I0219 19:44:52.835672 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:44:52 crc kubenswrapper[4787]: I0219 19:44:52.836222 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="fa69714c-41e5-4477-9267-303589d519de" containerName="mysqld-exporter" containerID="cri-o://7681af018ec623d62996ed62314b2dd958430251d52c68584ba58c84c988b18e" gracePeriod=30 Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.020986 4787 generic.go:334] "Generic (PLEG): container finished" podID="fa69714c-41e5-4477-9267-303589d519de" containerID="7681af018ec623d62996ed62314b2dd958430251d52c68584ba58c84c988b18e" exitCode=2 Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.021063 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-0" event={"ID":"fa69714c-41e5-4477-9267-303589d519de","Type":"ContainerDied","Data":"7681af018ec623d62996ed62314b2dd958430251d52c68584ba58c84c988b18e"} Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.023484 4787 generic.go:334] "Generic (PLEG): container finished" podID="a15222b2-deb2-46d1-a58d-d58d78228940" containerID="85354daa3da10b50cc1e08a7f157ecf02d9d1ea422d613d618253cf52e8f86b0" exitCode=2 Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.024571 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a15222b2-deb2-46d1-a58d-d58d78228940","Type":"ContainerDied","Data":"85354daa3da10b50cc1e08a7f157ecf02d9d1ea422d613d618253cf52e8f86b0"} Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.492286 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.665495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-config-data\") pod \"fa69714c-41e5-4477-9267-303589d519de\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.665566 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-combined-ca-bundle\") pod \"fa69714c-41e5-4477-9267-303589d519de\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.665828 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwx25\" (UniqueName: \"kubernetes.io/projected/fa69714c-41e5-4477-9267-303589d519de-kube-api-access-zwx25\") pod \"fa69714c-41e5-4477-9267-303589d519de\" (UID: \"fa69714c-41e5-4477-9267-303589d519de\") " Feb 
19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.672466 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.677808 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa69714c-41e5-4477-9267-303589d519de-kube-api-access-zwx25" (OuterVolumeSpecName: "kube-api-access-zwx25") pod "fa69714c-41e5-4477-9267-303589d519de" (UID: "fa69714c-41e5-4477-9267-303589d519de"). InnerVolumeSpecName "kube-api-access-zwx25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.717424 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa69714c-41e5-4477-9267-303589d519de" (UID: "fa69714c-41e5-4477-9267-303589d519de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.768438 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df9xw\" (UniqueName: \"kubernetes.io/projected/a15222b2-deb2-46d1-a58d-d58d78228940-kube-api-access-df9xw\") pod \"a15222b2-deb2-46d1-a58d-d58d78228940\" (UID: \"a15222b2-deb2-46d1-a58d-d58d78228940\") " Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.770074 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwx25\" (UniqueName: \"kubernetes.io/projected/fa69714c-41e5-4477-9267-303589d519de-kube-api-access-zwx25\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.770102 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.808057 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-config-data" (OuterVolumeSpecName: "config-data") pod "fa69714c-41e5-4477-9267-303589d519de" (UID: "fa69714c-41e5-4477-9267-303589d519de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.863684 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15222b2-deb2-46d1-a58d-d58d78228940-kube-api-access-df9xw" (OuterVolumeSpecName: "kube-api-access-df9xw") pod "a15222b2-deb2-46d1-a58d-d58d78228940" (UID: "a15222b2-deb2-46d1-a58d-d58d78228940"). InnerVolumeSpecName "kube-api-access-df9xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.872718 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df9xw\" (UniqueName: \"kubernetes.io/projected/a15222b2-deb2-46d1-a58d-d58d78228940-kube-api-access-df9xw\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.872750 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa69714c-41e5-4477-9267-303589d519de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:53 crc kubenswrapper[4787]: I0219 19:44:53.892597 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:44:53 crc kubenswrapper[4787]: E0219 19:44:53.892925 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.036060 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.036109 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"fa69714c-41e5-4477-9267-303589d519de","Type":"ContainerDied","Data":"54803c5b9b46dc569b1a5086febda0fcb02b92226d2d42c2ee2342222be01763"} Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.036154 4787 scope.go:117] "RemoveContainer" containerID="7681af018ec623d62996ed62314b2dd958430251d52c68584ba58c84c988b18e" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.038260 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.038777 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a15222b2-deb2-46d1-a58d-d58d78228940","Type":"ContainerDied","Data":"42c3060802ebf377be95e37d6cf0cfe485e867657daf6a306854baaecaf2d17d"} Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.083541 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.096519 4787 scope.go:117] "RemoveContainer" containerID="85354daa3da10b50cc1e08a7f157ecf02d9d1ea422d613d618253cf52e8f86b0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.108195 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.134515 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.157838 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.167113 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:44:54 crc 
kubenswrapper[4787]: E0219 19:44:54.167719 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa69714c-41e5-4477-9267-303589d519de" containerName="mysqld-exporter" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.167740 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa69714c-41e5-4477-9267-303589d519de" containerName="mysqld-exporter" Feb 19 19:44:54 crc kubenswrapper[4787]: E0219 19:44:54.167771 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" containerName="kube-state-metrics" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.167778 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" containerName="kube-state-metrics" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.167994 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" containerName="kube-state-metrics" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.168021 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa69714c-41e5-4477-9267-303589d519de" containerName="mysqld-exporter" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.168851 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.171503 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.171793 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.180704 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.193350 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.194035 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.197643 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.199194 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.199279 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.199366 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.201289 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.201816 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.209864 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.286555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf8gc\" (UniqueName: \"kubernetes.io/projected/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-api-access-qf8gc\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.286628 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.286808 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.286974 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.287147 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.287193 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-config-data\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.287273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.287322 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw72f\" (UniqueName: \"kubernetes.io/projected/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-kube-api-access-lw72f\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 
19:44:54.390178 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf8gc\" (UniqueName: \"kubernetes.io/projected/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-api-access-qf8gc\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390250 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390316 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390404 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390502 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390545 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-config-data\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390641 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.390722 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw72f\" (UniqueName: \"kubernetes.io/projected/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-kube-api-access-lw72f\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.395809 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.395967 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.396140 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.396781 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-config-data\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.397234 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.407076 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw72f\" (UniqueName: \"kubernetes.io/projected/be25b512-f3a6-4bdc-81a1-a4bcc1ef237e-kube-api-access-lw72f\") pod \"mysqld-exporter-0\" (UID: \"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e\") " pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.407096 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.425825 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf8gc\" (UniqueName: \"kubernetes.io/projected/e7a69404-5a15-40e5-bd22-faa4493739fa-kube-api-access-qf8gc\") pod \"kube-state-metrics-0\" 
(UID: \"e7a69404-5a15-40e5-bd22-faa4493739fa\") " pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.492651 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.525247 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.912470 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15222b2-deb2-46d1-a58d-d58d78228940" path="/var/lib/kubelet/pods/a15222b2-deb2-46d1-a58d-d58d78228940/volumes" Feb 19 19:44:54 crc kubenswrapper[4787]: I0219 19:44:54.915334 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa69714c-41e5-4477-9267-303589d519de" path="/var/lib/kubelet/pods/fa69714c-41e5-4477-9267-303589d519de/volumes" Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.021570 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.080664 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e","Type":"ContainerStarted","Data":"f1cc96e35aa5d634b33835738888006f716e08cb254a96aa6df5ba5cf3f2ed33"} Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.083109 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.093417 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.115465 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.182343 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.182640 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-central-agent" containerID="cri-o://4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9" gracePeriod=30 Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.182708 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="proxy-httpd" containerID="cri-o://0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a" gracePeriod=30 Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.182797 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-notification-agent" containerID="cri-o://b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af" gracePeriod=30 Feb 19 19:44:55 crc kubenswrapper[4787]: I0219 19:44:55.182785 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="sg-core" containerID="cri-o://17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f" gracePeriod=30 Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.091046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e7a69404-5a15-40e5-bd22-faa4493739fa","Type":"ContainerStarted","Data":"6a1d9acd1ee4f39d881a846351657aed56e82fff0de7ff523696a7c509b0b356"} Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.093030 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.093181 4787 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e7a69404-5a15-40e5-bd22-faa4493739fa","Type":"ContainerStarted","Data":"6fc52cf1db699c54c19001acbf74851a91cbf429045b60dcb348408198334060"} Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.094371 4787 generic.go:334] "Generic (PLEG): container finished" podID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerID="0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a" exitCode=0 Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.094862 4787 generic.go:334] "Generic (PLEG): container finished" podID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerID="17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f" exitCode=2 Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.094956 4787 generic.go:334] "Generic (PLEG): container finished" podID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerID="4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9" exitCode=0 Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.094410 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerDied","Data":"0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a"} Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.095235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerDied","Data":"17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f"} Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.095319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerDied","Data":"4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9"} Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.097254 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-0" event={"ID":"be25b512-f3a6-4bdc-81a1-a4bcc1ef237e","Type":"ContainerStarted","Data":"49f5809e5552aca6a412ad852f1fcd4b2fb4fac2988b0879896caba2063e60f3"} Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.124999 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.5365332390000002 podStartE2EDuration="2.124980408s" podCreationTimestamp="2026-02-19 19:44:54 +0000 UTC" firstStartedPulling="2026-02-19 19:44:55.097092094 +0000 UTC m=+1562.887758036" lastFinishedPulling="2026-02-19 19:44:55.685539263 +0000 UTC m=+1563.476205205" observedRunningTime="2026-02-19 19:44:56.112194507 +0000 UTC m=+1563.902860449" watchObservedRunningTime="2026-02-19 19:44:56.124980408 +0000 UTC m=+1563.915646350" Feb 19 19:44:56 crc kubenswrapper[4787]: I0219 19:44:56.152546 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.4331407760000001 podStartE2EDuration="2.152528547s" podCreationTimestamp="2026-02-19 19:44:54 +0000 UTC" firstStartedPulling="2026-02-19 19:44:55.022707781 +0000 UTC m=+1562.813373713" lastFinishedPulling="2026-02-19 19:44:55.742095542 +0000 UTC m=+1563.532761484" observedRunningTime="2026-02-19 19:44:56.134047865 +0000 UTC m=+1563.924713807" watchObservedRunningTime="2026-02-19 19:44:56.152528547 +0000 UTC m=+1563.943194489" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.102179 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.137909 4787 generic.go:334] "Generic (PLEG): container finished" podID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerID="b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af" exitCode=0 Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.137971 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerDied","Data":"b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af"} Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.138001 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198","Type":"ContainerDied","Data":"546211aea248f1a04e2c1fe4daeb7b3860b2ecbd30bdcdd9e76d7ffb4970b512"} Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.138023 4787 scope.go:117] "RemoveContainer" containerID="0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.138181 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.181158 4787 scope.go:117] "RemoveContainer" containerID="17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.196325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-sg-core-conf-yaml\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.196456 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-run-httpd\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.196497 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcmxv\" (UniqueName: \"kubernetes.io/projected/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-kube-api-access-wcmxv\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.197429 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-scripts\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.197495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-log-httpd\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 
crc kubenswrapper[4787]: I0219 19:44:58.197541 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-config-data\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.197657 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-combined-ca-bundle\") pod \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\" (UID: \"a5feca0c-04bc-4f79-8ec4-31ce3d9b5198\") " Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.197706 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.198031 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.198530 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.198557 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.206929 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-kube-api-access-wcmxv" (OuterVolumeSpecName: "kube-api-access-wcmxv") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). InnerVolumeSpecName "kube-api-access-wcmxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.220239 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-scripts" (OuterVolumeSpecName: "scripts") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.229066 4787 scope.go:117] "RemoveContainer" containerID="b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.251222 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.300363 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.300391 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcmxv\" (UniqueName: \"kubernetes.io/projected/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-kube-api-access-wcmxv\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.300402 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.305865 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.343891 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-config-data" (OuterVolumeSpecName: "config-data") pod "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" (UID: "a5feca0c-04bc-4f79-8ec4-31ce3d9b5198"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.360870 4787 scope.go:117] "RemoveContainer" containerID="4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.402950 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.402991 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.417704 4787 scope.go:117] "RemoveContainer" containerID="0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.421818 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a\": container with ID starting with 0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a not found: ID does not exist" containerID="0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.421886 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a"} err="failed to get container status \"0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a\": rpc error: code = NotFound desc = could not find container \"0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a\": container with ID starting with 0a8aa9f707a1e8effc8171114d53fb835e1513c70e3ee08845b987d7fc355e7a not found: ID does not 
exist" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.421926 4787 scope.go:117] "RemoveContainer" containerID="17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.426045 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f\": container with ID starting with 17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f not found: ID does not exist" containerID="17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.426081 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f"} err="failed to get container status \"17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f\": rpc error: code = NotFound desc = could not find container \"17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f\": container with ID starting with 17e8c407e9b4bc707c3ef0a92420b4d0d1016f8d41b44a0ed0ccc868f3f56a6f not found: ID does not exist" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.426101 4787 scope.go:117] "RemoveContainer" containerID="b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.426944 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af\": container with ID starting with b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af not found: ID does not exist" containerID="b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.427001 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af"} err="failed to get container status \"b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af\": rpc error: code = NotFound desc = could not find container \"b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af\": container with ID starting with b26063a0cc7dad67c63a8a75190ddbe627eff1e3b6dcfec3fc6cbc97da1f00af not found: ID does not exist" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.427035 4787 scope.go:117] "RemoveContainer" containerID="4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.427447 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9\": container with ID starting with 4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9 not found: ID does not exist" containerID="4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.427485 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9"} err="failed to get container status \"4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9\": rpc error: code = NotFound desc = could not find container \"4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9\": container with ID starting with 4a80cb05f158e8f8a91da071de22a71d9932bc98e7e966b3f836b89ed19bf3f9 not found: ID does not exist" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.495868 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.507359 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.526978 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.531080 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="sg-core" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.531118 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="sg-core" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.531178 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-notification-agent" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.531187 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-notification-agent" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.531256 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-central-agent" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.531267 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-central-agent" Feb 19 19:44:58 crc kubenswrapper[4787]: E0219 19:44:58.531312 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="proxy-httpd" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.531320 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="proxy-httpd" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.532128 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="proxy-httpd" Feb 19 
19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.532172 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="sg-core" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.532203 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-central-agent" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.532214 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" containerName="ceilometer-notification-agent" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.548589 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.548793 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.552660 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.553101 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.553270 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.708982 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-log-httpd\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.709307 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.709672 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx2mj\" (UniqueName: \"kubernetes.io/projected/ba84758e-1995-4a24-9322-acb5c0f3b20f-kube-api-access-vx2mj\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.709792 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-scripts\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.709913 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.710074 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-run-httpd\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.710220 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-config-data\") pod \"ceilometer-0\" (UID: 
\"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.710363 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.812528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-config-data\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.813366 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.813553 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-log-httpd\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.813671 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.813928 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vx2mj\" (UniqueName: \"kubernetes.io/projected/ba84758e-1995-4a24-9322-acb5c0f3b20f-kube-api-access-vx2mj\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.814014 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-scripts\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.814114 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.814532 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-run-httpd\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.814318 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-log-httpd\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.814995 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-run-httpd\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " 
pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.817404 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-scripts\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.818022 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.818366 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.819007 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-config-data\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.820289 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.831779 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx2mj\" (UniqueName: 
\"kubernetes.io/projected/ba84758e-1995-4a24-9322-acb5c0f3b20f-kube-api-access-vx2mj\") pod \"ceilometer-0\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") " pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.875809 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 19:44:58 crc kubenswrapper[4787]: I0219 19:44:58.911781 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5feca0c-04bc-4f79-8ec4-31ce3d9b5198" path="/var/lib/kubelet/pods/a5feca0c-04bc-4f79-8ec4-31ce3d9b5198/volumes" Feb 19 19:44:59 crc kubenswrapper[4787]: I0219 19:44:59.385841 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:44:59 crc kubenswrapper[4787]: W0219 19:44:59.386142 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba84758e_1995_4a24_9322_acb5c0f3b20f.slice/crio-91703d15d16b614b2b836d234bc6405bb5525b6432919cc241872af1b5f892f5 WatchSource:0}: Error finding container 91703d15d16b614b2b836d234bc6405bb5525b6432919cc241872af1b5f892f5: Status 404 returned error can't find the container with id 91703d15d16b614b2b836d234bc6405bb5525b6432919cc241872af1b5f892f5 Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.143149 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw"] Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.146053 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.148328 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.148494 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.156129 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw"] Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.176817 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerStarted","Data":"2ec9a268c7d019ea3f7790e6681cdf49f3ceffc6f6080255048f139e30219897"} Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.177137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerStarted","Data":"91703d15d16b614b2b836d234bc6405bb5525b6432919cc241872af1b5f892f5"} Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.246120 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vptm\" (UniqueName: \"kubernetes.io/projected/e79bad24-7b6a-46d9-8ee4-c710aba23e86-kube-api-access-5vptm\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.246233 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e79bad24-7b6a-46d9-8ee4-c710aba23e86-config-volume\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.246344 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79bad24-7b6a-46d9-8ee4-c710aba23e86-secret-volume\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.353567 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vptm\" (UniqueName: \"kubernetes.io/projected/e79bad24-7b6a-46d9-8ee4-c710aba23e86-kube-api-access-5vptm\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.353679 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79bad24-7b6a-46d9-8ee4-c710aba23e86-config-volume\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.353816 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79bad24-7b6a-46d9-8ee4-c710aba23e86-secret-volume\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: 
I0219 19:45:00.354776 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79bad24-7b6a-46d9-8ee4-c710aba23e86-config-volume\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.357884 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79bad24-7b6a-46d9-8ee4-c710aba23e86-secret-volume\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.386190 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vptm\" (UniqueName: \"kubernetes.io/projected/e79bad24-7b6a-46d9-8ee4-c710aba23e86-kube-api-access-5vptm\") pod \"collect-profiles-29525505-t5vdw\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.475490 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:00 crc kubenswrapper[4787]: I0219 19:45:00.958401 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw"] Feb 19 19:45:00 crc kubenswrapper[4787]: W0219 19:45:00.963711 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode79bad24_7b6a_46d9_8ee4_c710aba23e86.slice/crio-a393c40a3b288454a1cdee8e1e0dec1a06adca957611b8dfae8434fc04f9830e WatchSource:0}: Error finding container a393c40a3b288454a1cdee8e1e0dec1a06adca957611b8dfae8434fc04f9830e: Status 404 returned error can't find the container with id a393c40a3b288454a1cdee8e1e0dec1a06adca957611b8dfae8434fc04f9830e Feb 19 19:45:01 crc kubenswrapper[4787]: I0219 19:45:01.192324 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" event={"ID":"e79bad24-7b6a-46d9-8ee4-c710aba23e86","Type":"ContainerStarted","Data":"a393c40a3b288454a1cdee8e1e0dec1a06adca957611b8dfae8434fc04f9830e"} Feb 19 19:45:01 crc kubenswrapper[4787]: I0219 19:45:01.197088 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerStarted","Data":"bbc3e0fc56f92d759f37a13fdabb4aa6b2361fcb21e7dbcf545c8c3d606bb3d5"} Feb 19 19:45:02 crc kubenswrapper[4787]: I0219 19:45:02.209178 4787 generic.go:334] "Generic (PLEG): container finished" podID="e79bad24-7b6a-46d9-8ee4-c710aba23e86" containerID="3a84e047ab67813124193ff3588582449e9c8a05bfbf412bd82fc6bcb0a09317" exitCode=0 Feb 19 19:45:02 crc kubenswrapper[4787]: I0219 19:45:02.209331 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" 
event={"ID":"e79bad24-7b6a-46d9-8ee4-c710aba23e86","Type":"ContainerDied","Data":"3a84e047ab67813124193ff3588582449e9c8a05bfbf412bd82fc6bcb0a09317"} Feb 19 19:45:02 crc kubenswrapper[4787]: I0219 19:45:02.211697 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerStarted","Data":"250b38f6fd17bccd45971da382c0bbfef5c15de18bc2c8dbf77b60180ec1f2af"} Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.746747 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.848665 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vptm\" (UniqueName: \"kubernetes.io/projected/e79bad24-7b6a-46d9-8ee4-c710aba23e86-kube-api-access-5vptm\") pod \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.850783 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79bad24-7b6a-46d9-8ee4-c710aba23e86-config-volume\") pod \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.850812 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79bad24-7b6a-46d9-8ee4-c710aba23e86-secret-volume\") pod \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\" (UID: \"e79bad24-7b6a-46d9-8ee4-c710aba23e86\") " Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.853172 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79bad24-7b6a-46d9-8ee4-c710aba23e86-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"e79bad24-7b6a-46d9-8ee4-c710aba23e86" (UID: "e79bad24-7b6a-46d9-8ee4-c710aba23e86"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.855264 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79bad24-7b6a-46d9-8ee4-c710aba23e86-kube-api-access-5vptm" (OuterVolumeSpecName: "kube-api-access-5vptm") pod "e79bad24-7b6a-46d9-8ee4-c710aba23e86" (UID: "e79bad24-7b6a-46d9-8ee4-c710aba23e86"). InnerVolumeSpecName "kube-api-access-5vptm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.855337 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79bad24-7b6a-46d9-8ee4-c710aba23e86-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e79bad24-7b6a-46d9-8ee4-c710aba23e86" (UID: "e79bad24-7b6a-46d9-8ee4-c710aba23e86"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.954000 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vptm\" (UniqueName: \"kubernetes.io/projected/e79bad24-7b6a-46d9-8ee4-c710aba23e86-kube-api-access-5vptm\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.954266 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e79bad24-7b6a-46d9-8ee4-c710aba23e86-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:03 crc kubenswrapper[4787]: I0219 19:45:03.954349 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e79bad24-7b6a-46d9-8ee4-c710aba23e86-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.240277 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" event={"ID":"e79bad24-7b6a-46d9-8ee4-c710aba23e86","Type":"ContainerDied","Data":"a393c40a3b288454a1cdee8e1e0dec1a06adca957611b8dfae8434fc04f9830e"} Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.240686 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a393c40a3b288454a1cdee8e1e0dec1a06adca957611b8dfae8434fc04f9830e" Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.240296 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw" Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.243495 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerStarted","Data":"5b01d76641b8e7fbba58a55a771f8b60bcc838488dc5f4c7365e7f5281a64a2d"} Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.244254 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.284034 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.824157999 podStartE2EDuration="6.284015497s" podCreationTimestamp="2026-02-19 19:44:58 +0000 UTC" firstStartedPulling="2026-02-19 19:44:59.389252711 +0000 UTC m=+1567.179918653" lastFinishedPulling="2026-02-19 19:45:03.849110209 +0000 UTC m=+1571.639776151" observedRunningTime="2026-02-19 19:45:04.265201085 +0000 UTC m=+1572.055867037" watchObservedRunningTime="2026-02-19 19:45:04.284015497 +0000 UTC m=+1572.074681439" Feb 19 19:45:04 crc kubenswrapper[4787]: I0219 19:45:04.559536 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 19:45:08 crc kubenswrapper[4787]: I0219 19:45:08.893146 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:45:08 crc kubenswrapper[4787]: E0219 19:45:08.893608 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" 
podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:45:23 crc kubenswrapper[4787]: I0219 19:45:23.892514 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:45:23 crc kubenswrapper[4787]: E0219 19:45:23.893666 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:45:28 crc kubenswrapper[4787]: I0219 19:45:28.886273 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.745784 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xqh45"] Feb 19 19:45:29 crc kubenswrapper[4787]: E0219 19:45:29.746398 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79bad24-7b6a-46d9-8ee4-c710aba23e86" containerName="collect-profiles" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.746421 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79bad24-7b6a-46d9-8ee4-c710aba23e86" containerName="collect-profiles" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.746731 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79bad24-7b6a-46d9-8ee4-c710aba23e86" containerName="collect-profiles" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.748445 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.759160 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqh45"] Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.907789 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68jr\" (UniqueName: \"kubernetes.io/projected/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-kube-api-access-r68jr\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.908064 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-utilities\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:29 crc kubenswrapper[4787]: I0219 19:45:29.908151 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-catalog-content\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.010565 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-utilities\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.010790 4787 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-catalog-content\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.011752 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68jr\" (UniqueName: \"kubernetes.io/projected/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-kube-api-access-r68jr\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.012302 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-utilities\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.012408 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-catalog-content\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.042881 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68jr\" (UniqueName: \"kubernetes.io/projected/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-kube-api-access-r68jr\") pod \"redhat-marketplace-xqh45\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.086443 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:30 crc kubenswrapper[4787]: I0219 19:45:30.610043 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqh45"] Feb 19 19:45:31 crc kubenswrapper[4787]: I0219 19:45:31.567628 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerID="5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926" exitCode=0 Feb 19 19:45:31 crc kubenswrapper[4787]: I0219 19:45:31.568825 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerDied","Data":"5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926"} Feb 19 19:45:31 crc kubenswrapper[4787]: I0219 19:45:31.568917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerStarted","Data":"67ab0b3accc2f92061c755b0491e937ac18066e5fe9fc43d03a142ac702a3094"} Feb 19 19:45:31 crc kubenswrapper[4787]: I0219 19:45:31.570278 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:45:32 crc kubenswrapper[4787]: I0219 19:45:32.583810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerStarted","Data":"956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd"} Feb 19 19:45:33 crc kubenswrapper[4787]: I0219 19:45:33.601681 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerID="956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd" exitCode=0 Feb 19 19:45:33 crc kubenswrapper[4787]: I0219 19:45:33.601736 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerDied","Data":"956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd"} Feb 19 19:45:34 crc kubenswrapper[4787]: I0219 19:45:34.612832 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerStarted","Data":"c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5"} Feb 19 19:45:34 crc kubenswrapper[4787]: I0219 19:45:34.640359 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xqh45" podStartSLOduration=3.207243351 podStartE2EDuration="5.640340261s" podCreationTimestamp="2026-02-19 19:45:29 +0000 UTC" firstStartedPulling="2026-02-19 19:45:31.569791466 +0000 UTC m=+1599.360457448" lastFinishedPulling="2026-02-19 19:45:34.002888416 +0000 UTC m=+1601.793554358" observedRunningTime="2026-02-19 19:45:34.62935594 +0000 UTC m=+1602.420021882" watchObservedRunningTime="2026-02-19 19:45:34.640340261 +0000 UTC m=+1602.431006203" Feb 19 19:45:38 crc kubenswrapper[4787]: I0219 19:45:38.891781 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:45:38 crc kubenswrapper[4787]: E0219 19:45:38.892505 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.742246 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-d8pm7"] Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.745027 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.768650 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8pm7"] Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.856406 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqws\" (UniqueName: \"kubernetes.io/projected/218fd09b-001e-43a3-bb12-fb7f7730baea-kube-api-access-wzqws\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.856752 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-utilities\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.856847 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-catalog-content\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.959199 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-catalog-content\") pod \"community-operators-d8pm7\" (UID: 
\"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.959381 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqws\" (UniqueName: \"kubernetes.io/projected/218fd09b-001e-43a3-bb12-fb7f7730baea-kube-api-access-wzqws\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.959511 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-utilities\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.959694 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-catalog-content\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.959972 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-utilities\") pod \"community-operators-d8pm7\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:39 crc kubenswrapper[4787]: I0219 19:45:39.981458 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqws\" (UniqueName: \"kubernetes.io/projected/218fd09b-001e-43a3-bb12-fb7f7730baea-kube-api-access-wzqws\") pod \"community-operators-d8pm7\" (UID: 
\"218fd09b-001e-43a3-bb12-fb7f7730baea\") " pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.001199 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-9m8s6"] Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.012372 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-9m8s6"] Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.065855 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.087154 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.087228 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.112854 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-xq7tp"] Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.114881 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.144566 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xq7tp"] Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.179429 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.268044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-config-data\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.268122 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsfn\" (UniqueName: \"kubernetes.io/projected/fb810906-81bd-42b7-9a2b-0900059baba9-kube-api-access-bvsfn\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.268260 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-combined-ca-bundle\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.370787 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsfn\" (UniqueName: \"kubernetes.io/projected/fb810906-81bd-42b7-9a2b-0900059baba9-kube-api-access-bvsfn\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc 
kubenswrapper[4787]: I0219 19:45:40.371272 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-combined-ca-bundle\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.371430 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-config-data\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.378468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-combined-ca-bundle\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.386401 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-config-data\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.396517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvsfn\" (UniqueName: \"kubernetes.io/projected/fb810906-81bd-42b7-9a2b-0900059baba9-kube-api-access-bvsfn\") pod \"heat-db-sync-xq7tp\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.535764 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xq7tp" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.602227 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8pm7"] Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.698034 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerStarted","Data":"3d8fbff4295e3cfb353f7246d811af793705f1ceca1d30c6619b5854f0022c43"} Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.764847 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:40 crc kubenswrapper[4787]: I0219 19:45:40.913581 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2b70fa-1540-4748-8660-6d1fb44036fe" path="/var/lib/kubelet/pods/3d2b70fa-1540-4748-8660-6d1fb44036fe/volumes" Feb 19 19:45:41 crc kubenswrapper[4787]: I0219 19:45:41.018001 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xq7tp"] Feb 19 19:45:41 crc kubenswrapper[4787]: I0219 19:45:41.713688 4787 generic.go:334] "Generic (PLEG): container finished" podID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerID="c77ab1ada1a9a0a622de8eb8061f0820eb39c64157f3191a406aa33a87ff1af4" exitCode=0 Feb 19 19:45:41 crc kubenswrapper[4787]: I0219 19:45:41.713774 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerDied","Data":"c77ab1ada1a9a0a622de8eb8061f0820eb39c64157f3191a406aa33a87ff1af4"} Feb 19 19:45:41 crc kubenswrapper[4787]: I0219 19:45:41.721380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xq7tp" 
event={"ID":"fb810906-81bd-42b7-9a2b-0900059baba9","Type":"ContainerStarted","Data":"226fc085ba3fedf7ca263fd728bf5f97dd23bc49397d71df56920c128d445f5d"} Feb 19 19:45:42 crc kubenswrapper[4787]: I0219 19:45:42.052216 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:45:42 crc kubenswrapper[4787]: I0219 19:45:42.924790 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqh45"] Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.196505 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.478181 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.478825 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-central-agent" containerID="cri-o://2ec9a268c7d019ea3f7790e6681cdf49f3ceffc6f6080255048f139e30219897" gracePeriod=30 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.478865 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="proxy-httpd" containerID="cri-o://5b01d76641b8e7fbba58a55a771f8b60bcc838488dc5f4c7365e7f5281a64a2d" gracePeriod=30 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.478964 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="sg-core" containerID="cri-o://250b38f6fd17bccd45971da382c0bbfef5c15de18bc2c8dbf77b60180ec1f2af" gracePeriod=30 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.479020 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-notification-agent" containerID="cri-o://bbc3e0fc56f92d759f37a13fdabb4aa6b2361fcb21e7dbcf545c8c3d606bb3d5" gracePeriod=30 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.742917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerStarted","Data":"1aad18d1fb84f4db9bb63102e3b11da22fdc18ae9e63d1cf56ccc5b2be39ab53"} Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.773165 4787 generic.go:334] "Generic (PLEG): container finished" podID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerID="5b01d76641b8e7fbba58a55a771f8b60bcc838488dc5f4c7365e7f5281a64a2d" exitCode=0 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.773201 4787 generic.go:334] "Generic (PLEG): container finished" podID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerID="250b38f6fd17bccd45971da382c0bbfef5c15de18bc2c8dbf77b60180ec1f2af" exitCode=2 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.773398 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xqh45" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="registry-server" containerID="cri-o://c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5" gracePeriod=2 Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.773643 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerDied","Data":"5b01d76641b8e7fbba58a55a771f8b60bcc838488dc5f4c7365e7f5281a64a2d"} Feb 19 19:45:43 crc kubenswrapper[4787]: I0219 19:45:43.773672 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerDied","Data":"250b38f6fd17bccd45971da382c0bbfef5c15de18bc2c8dbf77b60180ec1f2af"} Feb 19 
19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.514534 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.591282 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-utilities\") pod \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.591381 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r68jr\" (UniqueName: \"kubernetes.io/projected/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-kube-api-access-r68jr\") pod \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.591672 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-catalog-content\") pod \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\" (UID: \"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7\") " Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.592972 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-utilities" (OuterVolumeSpecName: "utilities") pod "bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" (UID: "bf90eb61-be80-4d5a-ae04-e2fe389e4bb7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.602875 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-kube-api-access-r68jr" (OuterVolumeSpecName: "kube-api-access-r68jr") pod "bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" (UID: "bf90eb61-be80-4d5a-ae04-e2fe389e4bb7"). InnerVolumeSpecName "kube-api-access-r68jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.633887 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" (UID: "bf90eb61-be80-4d5a-ae04-e2fe389e4bb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.694450 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.694793 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.694806 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r68jr\" (UniqueName: \"kubernetes.io/projected/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7-kube-api-access-r68jr\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.799181 4787 generic.go:334] "Generic (PLEG): container finished" podID="ba84758e-1995-4a24-9322-acb5c0f3b20f" 
containerID="2ec9a268c7d019ea3f7790e6681cdf49f3ceffc6f6080255048f139e30219897" exitCode=0 Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.799248 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerDied","Data":"2ec9a268c7d019ea3f7790e6681cdf49f3ceffc6f6080255048f139e30219897"} Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.802203 4787 generic.go:334] "Generic (PLEG): container finished" podID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerID="c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5" exitCode=0 Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.802297 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqh45" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.802370 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerDied","Data":"c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5"} Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.802406 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqh45" event={"ID":"bf90eb61-be80-4d5a-ae04-e2fe389e4bb7","Type":"ContainerDied","Data":"67ab0b3accc2f92061c755b0491e937ac18066e5fe9fc43d03a142ac702a3094"} Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.802446 4787 scope.go:117] "RemoveContainer" containerID="c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.838892 4787 scope.go:117] "RemoveContainer" containerID="956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.844689 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xqh45"] Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.863344 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqh45"] Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.910805 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" path="/var/lib/kubelet/pods/bf90eb61-be80-4d5a-ae04-e2fe389e4bb7/volumes" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.926452 4787 scope.go:117] "RemoveContainer" containerID="5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926" Feb 19 19:45:44 crc kubenswrapper[4787]: I0219 19:45:44.999700 4787 scope.go:117] "RemoveContainer" containerID="c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5" Feb 19 19:45:45 crc kubenswrapper[4787]: E0219 19:45:45.005383 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5\": container with ID starting with c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5 not found: ID does not exist" containerID="c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5" Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.005431 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5"} err="failed to get container status \"c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5\": rpc error: code = NotFound desc = could not find container \"c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5\": container with ID starting with c2c50ab66bc8cc45a0121621cb3804127d19566eec2e8ee9c03abc965ce77eb5 not found: ID does not exist" Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.005456 4787 scope.go:117] "RemoveContainer" 
containerID="956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd" Feb 19 19:45:45 crc kubenswrapper[4787]: E0219 19:45:45.006921 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd\": container with ID starting with 956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd not found: ID does not exist" containerID="956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd" Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.006949 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd"} err="failed to get container status \"956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd\": rpc error: code = NotFound desc = could not find container \"956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd\": container with ID starting with 956f1cf7631f334d3db7abd1c5f81ecda552e2ee3a777b8434a379c88b35e7cd not found: ID does not exist" Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.006967 4787 scope.go:117] "RemoveContainer" containerID="5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926" Feb 19 19:45:45 crc kubenswrapper[4787]: E0219 19:45:45.011090 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926\": container with ID starting with 5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926 not found: ID does not exist" containerID="5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926" Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.011138 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926"} err="failed to get container status \"5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926\": rpc error: code = NotFound desc = could not find container \"5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926\": container with ID starting with 5c558c76ec46029f57292a920c8c60168a64e58897b68e9f1775cd37ee043926 not found: ID does not exist" Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.820442 4787 generic.go:334] "Generic (PLEG): container finished" podID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerID="1aad18d1fb84f4db9bb63102e3b11da22fdc18ae9e63d1cf56ccc5b2be39ab53" exitCode=0 Feb 19 19:45:45 crc kubenswrapper[4787]: I0219 19:45:45.820801 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerDied","Data":"1aad18d1fb84f4db9bb63102e3b11da22fdc18ae9e63d1cf56ccc5b2be39ab53"} Feb 19 19:45:47 crc kubenswrapper[4787]: I0219 19:45:47.850541 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerStarted","Data":"c318eb17cbaba566429d882ec0de4a7fe9adefdfb63c306717b0a0d4a73897cc"} Feb 19 19:45:47 crc kubenswrapper[4787]: I0219 19:45:47.874098 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d8pm7" podStartSLOduration=3.551347895 podStartE2EDuration="8.874077651s" podCreationTimestamp="2026-02-19 19:45:39 +0000 UTC" firstStartedPulling="2026-02-19 19:45:41.718624878 +0000 UTC m=+1609.509290820" lastFinishedPulling="2026-02-19 19:45:47.041354634 +0000 UTC m=+1614.832020576" observedRunningTime="2026-02-19 19:45:47.873087713 +0000 UTC m=+1615.663753655" watchObservedRunningTime="2026-02-19 19:45:47.874077651 +0000 UTC m=+1615.664743593" 
Feb 19 19:45:48 crc kubenswrapper[4787]: I0219 19:45:48.159114 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="rabbitmq" containerID="cri-o://409d0d347d59a437026f283198fe2e5aeafaf1b69b9ca6360c526effd05789dc" gracePeriod=604796
Feb 19 19:45:48 crc kubenswrapper[4787]: I0219 19:45:48.335522 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="rabbitmq" containerID="cri-o://07746b23ec7ee77675eb24404beadeb71529a82afdac8aa4c22ed541dcd36713" gracePeriod=604794
Feb 19 19:45:48 crc kubenswrapper[4787]: I0219 19:45:48.871181 4787 generic.go:334] "Generic (PLEG): container finished" podID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerID="bbc3e0fc56f92d759f37a13fdabb4aa6b2361fcb21e7dbcf545c8c3d606bb3d5" exitCode=0
Feb 19 19:45:48 crc kubenswrapper[4787]: I0219 19:45:48.871267 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerDied","Data":"bbc3e0fc56f92d759f37a13fdabb4aa6b2361fcb21e7dbcf545c8c3d606bb3d5"}
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.080961 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.341336 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421272 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx2mj\" (UniqueName: \"kubernetes.io/projected/ba84758e-1995-4a24-9322-acb5c0f3b20f-kube-api-access-vx2mj\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421540 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-sg-core-conf-yaml\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421595 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-ceilometer-tls-certs\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421637 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-combined-ca-bundle\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421751 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-config-data\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421805 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-scripts\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421923 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-log-httpd\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.421947 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-run-httpd\") pod \"ba84758e-1995-4a24-9322-acb5c0f3b20f\" (UID: \"ba84758e-1995-4a24-9322-acb5c0f3b20f\") "
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.422678 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.422926 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.437890 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba84758e-1995-4a24-9322-acb5c0f3b20f-kube-api-access-vx2mj" (OuterVolumeSpecName: "kube-api-access-vx2mj") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "kube-api-access-vx2mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.445792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-scripts" (OuterVolumeSpecName: "scripts") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.446321 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.527163 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.527191 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba84758e-1995-4a24-9322-acb5c0f3b20f-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.527201 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx2mj\" (UniqueName: \"kubernetes.io/projected/ba84758e-1995-4a24-9322-acb5c0f3b20f-kube-api-access-vx2mj\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.527211 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.569899 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.602369 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.629507 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.629546 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.642777 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.732832 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.743754 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-config-data" (OuterVolumeSpecName: "config-data") pod "ba84758e-1995-4a24-9322-acb5c0f3b20f" (UID: "ba84758e-1995-4a24-9322-acb5c0f3b20f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.835559 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba84758e-1995-4a24-9322-acb5c0f3b20f-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.885709 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba84758e-1995-4a24-9322-acb5c0f3b20f","Type":"ContainerDied","Data":"91703d15d16b614b2b836d234bc6405bb5525b6432919cc241872af1b5f892f5"}
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.885766 4787 scope.go:117] "RemoveContainer" containerID="5b01d76641b8e7fbba58a55a771f8b60bcc838488dc5f4c7365e7f5281a64a2d"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.885793 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.892419 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.892724 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.939283 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.947587 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.973589 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974145 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="proxy-httpd"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974164 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="proxy-httpd"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974179 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="extract-utilities"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974186 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="extract-utilities"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974200 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="sg-core"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974206 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="sg-core"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974236 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-central-agent"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974242 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-central-agent"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974252 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-notification-agent"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974258 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-notification-agent"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974272 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="extract-content"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974277 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="extract-content"
Feb 19 19:45:49 crc kubenswrapper[4787]: E0219 19:45:49.974288 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="registry-server"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974294 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="registry-server"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974494 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="sg-core"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974517 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-central-agent"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974527 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf90eb61-be80-4d5a-ae04-e2fe389e4bb7" containerName="registry-server"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974542 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="ceilometer-notification-agent"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.974551 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" containerName="proxy-httpd"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.976657 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.980444 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.980656 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.980826 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 19:45:49 crc kubenswrapper[4787]: I0219 19:45:49.994336 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045062 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045182 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-run-httpd\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045209 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmnz\" (UniqueName: \"kubernetes.io/projected/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-kube-api-access-5tmnz\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045261 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-log-httpd\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045281 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045304 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045325 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-scripts\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.045376 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-config-data\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.066870 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d8pm7"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.067120 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d8pm7"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.119733 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d8pm7"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147721 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-run-httpd\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147777 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tmnz\" (UniqueName: \"kubernetes.io/projected/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-kube-api-access-5tmnz\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147848 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-log-httpd\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147874 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147895 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147919 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-scripts\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.147973 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-config-data\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.148069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.148556 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-run-httpd\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.148713 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-log-httpd\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.157978 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.163972 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.165588 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-scripts\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.165796 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.168922 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-config-data\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.170629 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmnz\" (UniqueName: \"kubernetes.io/projected/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-kube-api-access-5tmnz\") pod \"ceilometer-0\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.307041 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 19:45:50 crc kubenswrapper[4787]: I0219 19:45:50.914820 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba84758e-1995-4a24-9322-acb5c0f3b20f" path="/var/lib/kubelet/pods/ba84758e-1995-4a24-9322-acb5c0f3b20f/volumes"
Feb 19 19:45:54 crc kubenswrapper[4787]: I0219 19:45:54.971123 4787 generic.go:334] "Generic (PLEG): container finished" podID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerID="07746b23ec7ee77675eb24404beadeb71529a82afdac8aa4c22ed541dcd36713" exitCode=0
Feb 19 19:45:54 crc kubenswrapper[4787]: I0219 19:45:54.972725 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"278d26c1-8a7c-4278-b84c-0c0c24d81f52","Type":"ContainerDied","Data":"07746b23ec7ee77675eb24404beadeb71529a82afdac8aa4c22ed541dcd36713"}
Feb 19 19:45:54 crc kubenswrapper[4787]: I0219 19:45:54.976703 4787 generic.go:334] "Generic (PLEG): container finished" podID="769a015d-4883-474b-a4e8-45a2b77f2412" containerID="409d0d347d59a437026f283198fe2e5aeafaf1b69b9ca6360c526effd05789dc" exitCode=0
Feb 19 19:45:54 crc kubenswrapper[4787]: I0219 19:45:54.976759 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"769a015d-4883-474b-a4e8-45a2b77f2412","Type":"ContainerDied","Data":"409d0d347d59a437026f283198fe2e5aeafaf1b69b9ca6360c526effd05789dc"}
Feb 19 19:45:55 crc kubenswrapper[4787]: I0219 19:45:55.030061 4787 scope.go:117] "RemoveContainer" containerID="250b38f6fd17bccd45971da382c0bbfef5c15de18bc2c8dbf77b60180ec1f2af"
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.042728 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"769a015d-4883-474b-a4e8-45a2b77f2412","Type":"ContainerDied","Data":"07e62df1b8434db679bd9ec8727b86c0ce349aee9c68debcda4cad54b2cfa938"}
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.043372 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e62df1b8434db679bd9ec8727b86c0ce349aee9c68debcda4cad54b2cfa938"
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.068435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"278d26c1-8a7c-4278-b84c-0c0c24d81f52","Type":"ContainerDied","Data":"8ae3fda264dc085e1b0a5b18a2be762d7d4c5dbc7e2fe29dffa5117d50f7fa32"}
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.068588 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae3fda264dc085e1b0a5b18a2be762d7d4c5dbc7e2fe29dffa5117d50f7fa32"
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.071143 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.073215 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.163995 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-tls\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164041 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/769a015d-4883-474b-a4e8-45a2b77f2412-pod-info\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164101 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/278d26c1-8a7c-4278-b84c-0c0c24d81f52-erlang-cookie-secret\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164142 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-confd\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164213 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-erlang-cookie\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164234 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-plugins-conf\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164248 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt65m\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-kube-api-access-kt65m\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164271 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-tls\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164308 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-server-conf\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164341 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-plugins-conf\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164374 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-confd\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-config-data\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164440 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-erlang-cookie\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164474 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/769a015d-4883-474b-a4e8-45a2b77f2412-erlang-cookie-secret\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.164540 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-config-data\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.167309 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.170708 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174014 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174072 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d554l\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-kube-api-access-d554l\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174100 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/278d26c1-8a7c-4278-b84c-0c0c24d81f52-pod-info\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174135 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-plugins\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174166 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-plugins\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174186 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-server-conf\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.174514 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.176247 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"769a015d-4883-474b-a4e8-45a2b77f2412\" (UID: \"769a015d-4883-474b-a4e8-45a2b77f2412\") "
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.177593 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.177628 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.177638 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.183700 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "plugins-conf".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.187824 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.193845 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278d26c1-8a7c-4278-b84c-0c0c24d81f52-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.195705 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.196921 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.197095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/769a015d-4883-474b-a4e8-45a2b77f2412-pod-info" (OuterVolumeSpecName: "pod-info") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.205986 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-kube-api-access-kt65m" (OuterVolumeSpecName: "kube-api-access-kt65m") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "kube-api-access-kt65m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.232283 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/278d26c1-8a7c-4278-b84c-0c0c24d81f52-pod-info" (OuterVolumeSpecName: "pod-info") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.232990 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.234873 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/769a015d-4883-474b-a4e8-45a2b77f2412-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.242857 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-kube-api-access-d554l" (OuterVolumeSpecName: "kube-api-access-d554l") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "kube-api-access-d554l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.244657 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-config-data" (OuterVolumeSpecName: "config-data") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.258237 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-config-data" (OuterVolumeSpecName: "config-data") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280185 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d554l\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-kube-api-access-d554l\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280212 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/278d26c1-8a7c-4278-b84c-0c0c24d81f52-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280225 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280238 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280249 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280259 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/769a015d-4883-474b-a4e8-45a2b77f2412-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280269 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/278d26c1-8a7c-4278-b84c-0c0c24d81f52-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 
19:45:58.280279 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280290 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt65m\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-kube-api-access-kt65m\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280300 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280310 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280321 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/769a015d-4883-474b-a4e8-45a2b77f2412-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.280335 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.407261 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-server-conf" (OuterVolumeSpecName: "server-conf") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.485732 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/278d26c1-8a7c-4278-b84c-0c0c24d81f52-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.487392 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-server-conf" (OuterVolumeSpecName: "server-conf") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.594779 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/769a015d-4883-474b-a4e8-45a2b77f2412-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.614893 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.620534 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-k45lq"] Feb 19 19:45:58 crc kubenswrapper[4787]: E0219 19:45:58.625761 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="rabbitmq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.625796 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="rabbitmq" Feb 19 19:45:58 crc kubenswrapper[4787]: E0219 19:45:58.625834 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="setup-container" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.625841 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="setup-container" Feb 19 19:45:58 crc kubenswrapper[4787]: E0219 19:45:58.625858 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="rabbitmq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.625865 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="rabbitmq" Feb 19 19:45:58 crc kubenswrapper[4787]: E0219 19:45:58.625880 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="setup-container" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.625889 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="setup-container" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.626300 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" containerName="rabbitmq" Feb 19 19:45:58 crc 
kubenswrapper[4787]: I0219 19:45:58.626327 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" containerName="rabbitmq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.627715 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.631869 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.648063 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.657739 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-k45lq"] Feb 19 19:45:58 crc kubenswrapper[4787]: E0219 19:45:58.668996 4787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929 podName:278d26c1-8a7c-4278-b84c-0c0c24d81f52 nodeName:}" failed. No retries permitted until 2026-02-19 19:45:59.168972331 +0000 UTC m=+1626.959638273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696518 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59zq\" (UniqueName: \"kubernetes.io/projected/ea956f9c-808d-4a82-88e9-83cc34c223c2-kube-api-access-w59zq\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696670 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-config\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696716 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696853 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: 
\"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696880 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696923 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.696949 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.697071 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/769a015d-4883-474b-a4e8-45a2b77f2412-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.697087 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/278d26c1-8a7c-4278-b84c-0c0c24d81f52-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803188 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-config\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803243 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803392 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803412 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803453 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.803577 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59zq\" (UniqueName: \"kubernetes.io/projected/ea956f9c-808d-4a82-88e9-83cc34c223c2-kube-api-access-w59zq\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.804412 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.806727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-config\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.806901 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.807320 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-svc\") 
pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.809739 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.809792 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.826145 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f" (OuterVolumeSpecName: "persistence") pod "769a015d-4883-474b-a4e8-45a2b77f2412" (UID: "769a015d-4883-474b-a4e8-45a2b77f2412"). InnerVolumeSpecName "pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.827203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59zq\" (UniqueName: \"kubernetes.io/projected/ea956f9c-808d-4a82-88e9-83cc34c223c2-kube-api-access-w59zq\") pod \"dnsmasq-dns-7d84b4d45c-k45lq\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.908698 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") on node \"crc\" " Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.956061 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.956226 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f") on node "crc" Feb 19 19:45:58 crc kubenswrapper[4787]: I0219 19:45:58.967502 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.015000 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.083394 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.083439 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.119895 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.138724 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.165907 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.171769 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.178906 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.179258 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.179491 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.179676 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.179793 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.179911 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ff257" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.180061 4787 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.180102 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.221421 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\" (UID: \"278d26c1-8a7c-4278-b84c-0c0c24d81f52\") " Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222090 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222157 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222187 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222233 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222278 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222313 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222368 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eebe8011-08bc-437a-89d5-f7aecaedceb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222467 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eebe8011-08bc-437a-89d5-f7aecaedceb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222549 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz94r\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-kube-api-access-kz94r\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222584 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.222674 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325307 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz94r\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-kube-api-access-kz94r\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325355 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325400 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325484 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325503 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325535 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325565 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325588 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325624 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eebe8011-08bc-437a-89d5-f7aecaedceb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.325699 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eebe8011-08bc-437a-89d5-f7aecaedceb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.326372 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.326502 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.326793 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eebe8011-08bc-437a-89d5-f7aecaedceb5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.327068 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.330413 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.331478 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.335347 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eebe8011-08bc-437a-89d5-f7aecaedceb5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 
19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.335357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eebe8011-08bc-437a-89d5-f7aecaedceb5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.335368 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.344706 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.344757 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/559af49a0292dcec9d08b73ac569dce92007fe855ff53d32c237c0ba151f0ca4/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.345784 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929" (OuterVolumeSpecName: "persistence") pod "278d26c1-8a7c-4278-b84c-0c0c24d81f52" (UID: "278d26c1-8a7c-4278-b84c-0c0c24d81f52"). InnerVolumeSpecName "pvc-bee7f173-cf88-4aae-b180-5a3751923929". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.346924 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz94r\" (UniqueName: \"kubernetes.io/projected/eebe8011-08bc-437a-89d5-f7aecaedceb5-kube-api-access-kz94r\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.417755 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d09b660-7231-4484-8ea5-bf53b2db8a9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"eebe8011-08bc-437a-89d5-f7aecaedceb5\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.427569 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") on node \"crc\" " Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.474041 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.481533 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.481857 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bee7f173-cf88-4aae-b180-5a3751923929" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929") on node "crc" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.503877 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.505687 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.524718 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.527017 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.529498 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") on node \"crc\" DevicePath \"\"" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.536977 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631378 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631458 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631542 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcvs\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-kube-api-access-xmcvs\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631574 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631622 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631659 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631703 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631783 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.631864 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-config-data\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733681 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-plugins\") 
pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733727 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmcvs\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-kube-api-access-xmcvs\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733747 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733783 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733801 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733818 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " 
pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733864 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733945 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.733985 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-config-data\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.734019 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.734069 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.734240 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.735077 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.735369 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-server-conf\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.735597 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-config-data\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.736244 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.736751 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.736777 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/074ae6b187beeece20d2cfb5ff5c72683c1851611c6a5bac612c514c4d6bbc9e/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.738082 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.739476 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-pod-info\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.740061 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.742697 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " 
pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.769435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmcvs\" (UniqueName: \"kubernetes.io/projected/8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef-kube-api-access-xmcvs\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.801340 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bee7f173-cf88-4aae-b180-5a3751923929\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bee7f173-cf88-4aae-b180-5a3751923929\") pod \"rabbitmq-server-2\" (UID: \"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef\") " pod="openstack/rabbitmq-server-2" Feb 19 19:45:59 crc kubenswrapper[4787]: I0219 19:45:59.861740 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 19 19:46:00 crc kubenswrapper[4787]: I0219 19:46:00.130534 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:46:00 crc kubenswrapper[4787]: I0219 19:46:00.183479 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8pm7"] Feb 19 19:46:00 crc kubenswrapper[4787]: I0219 19:46:00.910434 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278d26c1-8a7c-4278-b84c-0c0c24d81f52" path="/var/lib/kubelet/pods/278d26c1-8a7c-4278-b84c-0c0c24d81f52/volumes" Feb 19 19:46:00 crc kubenswrapper[4787]: I0219 19:46:00.911835 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="769a015d-4883-474b-a4e8-45a2b77f2412" path="/var/lib/kubelet/pods/769a015d-4883-474b-a4e8-45a2b77f2412/volumes" Feb 19 19:46:01 crc kubenswrapper[4787]: I0219 19:46:01.104309 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-d8pm7" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="registry-server" containerID="cri-o://c318eb17cbaba566429d882ec0de4a7fe9adefdfb63c306717b0a0d4a73897cc" gracePeriod=2 Feb 19 19:46:02 crc kubenswrapper[4787]: I0219 19:46:02.118156 4787 generic.go:334] "Generic (PLEG): container finished" podID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerID="c318eb17cbaba566429d882ec0de4a7fe9adefdfb63c306717b0a0d4a73897cc" exitCode=0 Feb 19 19:46:02 crc kubenswrapper[4787]: I0219 19:46:02.118198 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerDied","Data":"c318eb17cbaba566429d882ec0de4a7fe9adefdfb63c306717b0a0d4a73897cc"} Feb 19 19:46:03 crc kubenswrapper[4787]: I0219 19:46:03.586124 4787 scope.go:117] "RemoveContainer" containerID="bbc3e0fc56f92d759f37a13fdabb4aa6b2361fcb21e7dbcf545c8c3d606bb3d5" Feb 19 19:46:04 crc kubenswrapper[4787]: E0219 19:46:04.108660 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 19 19:46:04 crc kubenswrapper[4787]: E0219 19:46:04.109025 4787 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 19 19:46:04 crc kubenswrapper[4787]: E0219 19:46:04.109159 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvsfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-xq7tp_openstack(fb810906-81bd-42b7-9a2b-0900059baba9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 
19 19:46:04 crc kubenswrapper[4787]: E0219 19:46:04.110430 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-xq7tp" podUID="fb810906-81bd-42b7-9a2b-0900059baba9" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.136556 4787 scope.go:117] "RemoveContainer" containerID="2ec9a268c7d019ea3f7790e6681cdf49f3ceffc6f6080255048f139e30219897" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.151752 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8pm7" event={"ID":"218fd09b-001e-43a3-bb12-fb7f7730baea","Type":"ContainerDied","Data":"3d8fbff4295e3cfb353f7246d811af793705f1ceca1d30c6619b5854f0022c43"} Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.152020 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8fbff4295e3cfb353f7246d811af793705f1ceca1d30c6619b5854f0022c43" Feb 19 19:46:04 crc kubenswrapper[4787]: E0219 19:46:04.191534 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-xq7tp" podUID="fb810906-81bd-42b7-9a2b-0900059baba9" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.226889 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.306290 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.380636 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqws\" (UniqueName: \"kubernetes.io/projected/218fd09b-001e-43a3-bb12-fb7f7730baea-kube-api-access-wzqws\") pod \"218fd09b-001e-43a3-bb12-fb7f7730baea\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.381040 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-catalog-content\") pod \"218fd09b-001e-43a3-bb12-fb7f7730baea\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.381077 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-utilities\") pod \"218fd09b-001e-43a3-bb12-fb7f7730baea\" (UID: \"218fd09b-001e-43a3-bb12-fb7f7730baea\") " Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.382697 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-utilities" (OuterVolumeSpecName: "utilities") pod "218fd09b-001e-43a3-bb12-fb7f7730baea" (UID: "218fd09b-001e-43a3-bb12-fb7f7730baea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.393932 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218fd09b-001e-43a3-bb12-fb7f7730baea-kube-api-access-wzqws" (OuterVolumeSpecName: "kube-api-access-wzqws") pod "218fd09b-001e-43a3-bb12-fb7f7730baea" (UID: "218fd09b-001e-43a3-bb12-fb7f7730baea"). InnerVolumeSpecName "kube-api-access-wzqws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.439481 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "218fd09b-001e-43a3-bb12-fb7f7730baea" (UID: "218fd09b-001e-43a3-bb12-fb7f7730baea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.484962 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqws\" (UniqueName: \"kubernetes.io/projected/218fd09b-001e-43a3-bb12-fb7f7730baea-kube-api-access-wzqws\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.484993 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.485004 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/218fd09b-001e-43a3-bb12-fb7f7730baea-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:04 crc kubenswrapper[4787]: I0219 19:46:04.893832 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:46:04 crc kubenswrapper[4787]: E0219 19:46:04.894167 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:46:04 
crc kubenswrapper[4787]: I0219 19:46:04.992864 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 19:46:05 crc kubenswrapper[4787]: W0219 19:46:05.026163 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebe8011_08bc_437a_89d5_f7aecaedceb5.slice/crio-c0d3e8ee76bbf0062866a95a8bea56ec1aacb693db43873153ddc8900bf37d54 WatchSource:0}: Error finding container c0d3e8ee76bbf0062866a95a8bea56ec1aacb693db43873153ddc8900bf37d54: Status 404 returned error can't find the container with id c0d3e8ee76bbf0062866a95a8bea56ec1aacb693db43873153ddc8900bf37d54 Feb 19 19:46:05 crc kubenswrapper[4787]: W0219 19:46:05.061142 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea956f9c_808d_4a82_88e9_83cc34c223c2.slice/crio-d1062c1bbdc7d4bf1113781d908b54d7e89f2a7ac098769202e2f7fc2464e7d6 WatchSource:0}: Error finding container d1062c1bbdc7d4bf1113781d908b54d7e89f2a7ac098769202e2f7fc2464e7d6: Status 404 returned error can't find the container with id d1062c1bbdc7d4bf1113781d908b54d7e89f2a7ac098769202e2f7fc2464e7d6 Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.067767 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-k45lq"] Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.120167 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.296117 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eebe8011-08bc-437a-89d5-f7aecaedceb5","Type":"ContainerStarted","Data":"c0d3e8ee76bbf0062866a95a8bea56ec1aacb693db43873153ddc8900bf37d54"} Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.305505 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerStarted","Data":"898b6f0e71fdecf1cba1ba32ae9c388365ccfb7fcd03b968448961d95035e306"} Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.317310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" event={"ID":"ea956f9c-808d-4a82-88e9-83cc34c223c2","Type":"ContainerStarted","Data":"d1062c1bbdc7d4bf1113781d908b54d7e89f2a7ac098769202e2f7fc2464e7d6"} Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.327625 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8pm7" Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.417152 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8pm7"] Feb 19 19:46:05 crc kubenswrapper[4787]: I0219 19:46:05.434509 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d8pm7"] Feb 19 19:46:06 crc kubenswrapper[4787]: I0219 19:46:06.343943 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef","Type":"ContainerStarted","Data":"c1c4f4e068a3f24d51766941f1e328ca25ac69d3adc7018edcc7460f99195d92"} Feb 19 19:46:06 crc kubenswrapper[4787]: I0219 19:46:06.346686 4787 generic.go:334] "Generic (PLEG): container finished" podID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerID="9d0288e998fc385d906df470c9e01c74cea8f30e0f5326641379be898fb65021" exitCode=0 Feb 19 19:46:06 crc kubenswrapper[4787]: I0219 19:46:06.346724 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" event={"ID":"ea956f9c-808d-4a82-88e9-83cc34c223c2","Type":"ContainerDied","Data":"9d0288e998fc385d906df470c9e01c74cea8f30e0f5326641379be898fb65021"} Feb 19 19:46:06 crc kubenswrapper[4787]: I0219 19:46:06.906713 4787 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" path="/var/lib/kubelet/pods/218fd09b-001e-43a3-bb12-fb7f7730baea/volumes" Feb 19 19:46:07 crc kubenswrapper[4787]: I0219 19:46:07.365549 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef","Type":"ContainerStarted","Data":"384f4601b8450b5236d0e9cd0e103d588730ac9d2cfd6e27983e5e6e1289d6fd"} Feb 19 19:46:08 crc kubenswrapper[4787]: I0219 19:46:08.380009 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eebe8011-08bc-437a-89d5-f7aecaedceb5","Type":"ContainerStarted","Data":"f9fc7be8a9847855f609c07eb32028a39ed6d819833f45bb74656da8ab4013c3"} Feb 19 19:46:10 crc kubenswrapper[4787]: I0219 19:46:10.408214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerStarted","Data":"c9c41d14362b0ee2efd84b2ed33a65f829082643f053579f9a9bb3dec964ebdf"} Feb 19 19:46:10 crc kubenswrapper[4787]: I0219 19:46:10.412068 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" event={"ID":"ea956f9c-808d-4a82-88e9-83cc34c223c2","Type":"ContainerStarted","Data":"ed3bc4dd9b60506ede5da1aaef0ff331f6f51a6a0118fd248b20f8aa2cf93456"} Feb 19 19:46:10 crc kubenswrapper[4787]: I0219 19:46:10.413200 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:46:10 crc kubenswrapper[4787]: I0219 19:46:10.459933 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" podStartSLOduration=12.459913758 podStartE2EDuration="12.459913758s" podCreationTimestamp="2026-02-19 19:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:10.435712524 
+0000 UTC m=+1638.226378466" watchObservedRunningTime="2026-02-19 19:46:10.459913758 +0000 UTC m=+1638.250579700" Feb 19 19:46:11 crc kubenswrapper[4787]: I0219 19:46:11.425138 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerStarted","Data":"e55a7f1f9107aa241086f0f06c7208b97262784369b4ca5670f181e19a93e1d8"} Feb 19 19:46:12 crc kubenswrapper[4787]: I0219 19:46:12.438949 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerStarted","Data":"c093d8c6a13061fd930d9992dd9777dd7e91994efdca82deea207bc06aad7b79"} Feb 19 19:46:14 crc kubenswrapper[4787]: I0219 19:46:14.465472 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerStarted","Data":"8b218b1e5846713b8a5bf6a923f4349f95c146c6212415457aa6f4b349e6e953"} Feb 19 19:46:14 crc kubenswrapper[4787]: I0219 19:46:14.467272 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 19:46:14 crc kubenswrapper[4787]: I0219 19:46:14.495439 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=16.167509653 podStartE2EDuration="25.495416827s" podCreationTimestamp="2026-02-19 19:45:49 +0000 UTC" firstStartedPulling="2026-02-19 19:46:04.288723341 +0000 UTC m=+1632.079389283" lastFinishedPulling="2026-02-19 19:46:13.616630515 +0000 UTC m=+1641.407296457" observedRunningTime="2026-02-19 19:46:14.491400254 +0000 UTC m=+1642.282066196" watchObservedRunningTime="2026-02-19 19:46:14.495416827 +0000 UTC m=+1642.286082779" Feb 19 19:46:16 crc kubenswrapper[4787]: I0219 19:46:16.492444 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xq7tp" 
event={"ID":"fb810906-81bd-42b7-9a2b-0900059baba9","Type":"ContainerStarted","Data":"f6c62747bcaf8d76fa0433c3424c857b1c71872248c5f7e7a223fecdcc2f788d"} Feb 19 19:46:16 crc kubenswrapper[4787]: I0219 19:46:16.510277 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-xq7tp" podStartSLOduration=1.436788226 podStartE2EDuration="36.510259201s" podCreationTimestamp="2026-02-19 19:45:40 +0000 UTC" firstStartedPulling="2026-02-19 19:45:41.021283509 +0000 UTC m=+1608.811949451" lastFinishedPulling="2026-02-19 19:46:16.094754484 +0000 UTC m=+1643.885420426" observedRunningTime="2026-02-19 19:46:16.509220341 +0000 UTC m=+1644.299886283" watchObservedRunningTime="2026-02-19 19:46:16.510259201 +0000 UTC m=+1644.300925143" Feb 19 19:46:18 crc kubenswrapper[4787]: I0219 19:46:18.893365 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:46:18 crc kubenswrapper[4787]: E0219 19:46:18.894069 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:46:18 crc kubenswrapper[4787]: I0219 19:46:18.969876 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.034228 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf"] Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.034501 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" 
podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerName="dnsmasq-dns" containerID="cri-o://acb394ab72dbe204593957eaceb6ea30b1fdb11914501a13de1dcc9206422ee5" gracePeriod=10 Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.254081 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f88wr"] Feb 19 19:46:19 crc kubenswrapper[4787]: E0219 19:46:19.254588 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="extract-content" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.254619 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="extract-content" Feb 19 19:46:19 crc kubenswrapper[4787]: E0219 19:46:19.254638 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="extract-utilities" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.254646 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="extract-utilities" Feb 19 19:46:19 crc kubenswrapper[4787]: E0219 19:46:19.254686 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="registry-server" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.254692 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="registry-server" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.254897 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="218fd09b-001e-43a3-bb12-fb7f7730baea" containerName="registry-server" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.256226 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.280647 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f88wr"] Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.303584 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.303657 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhtx\" (UniqueName: \"kubernetes.io/projected/e7617c67-0e97-4496-abe7-8b5ab1db282d-kube-api-access-5zhtx\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.303681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.303720 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-config\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.303762 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.303850 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.304022 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.406801 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.406853 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhtx\" (UniqueName: \"kubernetes.io/projected/e7617c67-0e97-4496-abe7-8b5ab1db282d-kube-api-access-5zhtx\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.406875 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.406905 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-config\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.406935 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.407007 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.407110 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.408081 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.408093 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.408489 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.408747 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-config\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.408776 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.409172 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e7617c67-0e97-4496-abe7-8b5ab1db282d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.462133 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhtx\" (UniqueName: \"kubernetes.io/projected/e7617c67-0e97-4496-abe7-8b5ab1db282d-kube-api-access-5zhtx\") pod \"dnsmasq-dns-6f6df4f56c-f88wr\" (UID: \"e7617c67-0e97-4496-abe7-8b5ab1db282d\") " pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.582372 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.582784 4787 generic.go:334] "Generic (PLEG): container finished" podID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerID="acb394ab72dbe204593957eaceb6ea30b1fdb11914501a13de1dcc9206422ee5" exitCode=0 Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.582806 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" event={"ID":"ca72ab94-eca6-4c68-8571-dfbf22d53215","Type":"ContainerDied","Data":"acb394ab72dbe204593957eaceb6ea30b1fdb11914501a13de1dcc9206422ee5"} Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.589830 4787 generic.go:334] "Generic (PLEG): container finished" podID="fb810906-81bd-42b7-9a2b-0900059baba9" containerID="f6c62747bcaf8d76fa0433c3424c857b1c71872248c5f7e7a223fecdcc2f788d" exitCode=0 Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.590084 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xq7tp" event={"ID":"fb810906-81bd-42b7-9a2b-0900059baba9","Type":"ContainerDied","Data":"f6c62747bcaf8d76fa0433c3424c857b1c71872248c5f7e7a223fecdcc2f788d"} Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.796079 4787 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.925626 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rx9\" (UniqueName: \"kubernetes.io/projected/ca72ab94-eca6-4c68-8571-dfbf22d53215-kube-api-access-d2rx9\") pod \"ca72ab94-eca6-4c68-8571-dfbf22d53215\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.925754 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-config\") pod \"ca72ab94-eca6-4c68-8571-dfbf22d53215\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.925835 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-sb\") pod \"ca72ab94-eca6-4c68-8571-dfbf22d53215\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.925954 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-swift-storage-0\") pod \"ca72ab94-eca6-4c68-8571-dfbf22d53215\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.926102 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-svc\") pod \"ca72ab94-eca6-4c68-8571-dfbf22d53215\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.926193 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-nb\") pod \"ca72ab94-eca6-4c68-8571-dfbf22d53215\" (UID: \"ca72ab94-eca6-4c68-8571-dfbf22d53215\") " Feb 19 19:46:19 crc kubenswrapper[4787]: I0219 19:46:19.952849 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca72ab94-eca6-4c68-8571-dfbf22d53215-kube-api-access-d2rx9" (OuterVolumeSpecName: "kube-api-access-d2rx9") pod "ca72ab94-eca6-4c68-8571-dfbf22d53215" (UID: "ca72ab94-eca6-4c68-8571-dfbf22d53215"). InnerVolumeSpecName "kube-api-access-d2rx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.027496 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-config" (OuterVolumeSpecName: "config") pod "ca72ab94-eca6-4c68-8571-dfbf22d53215" (UID: "ca72ab94-eca6-4c68-8571-dfbf22d53215"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.029446 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2rx9\" (UniqueName: \"kubernetes.io/projected/ca72ab94-eca6-4c68-8571-dfbf22d53215-kube-api-access-d2rx9\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.029491 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.036105 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca72ab94-eca6-4c68-8571-dfbf22d53215" (UID: "ca72ab94-eca6-4c68-8571-dfbf22d53215"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.092651 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca72ab94-eca6-4c68-8571-dfbf22d53215" (UID: "ca72ab94-eca6-4c68-8571-dfbf22d53215"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.102679 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca72ab94-eca6-4c68-8571-dfbf22d53215" (UID: "ca72ab94-eca6-4c68-8571-dfbf22d53215"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.121560 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca72ab94-eca6-4c68-8571-dfbf22d53215" (UID: "ca72ab94-eca6-4c68-8571-dfbf22d53215"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.132234 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.132272 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.132283 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.132292 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca72ab94-eca6-4c68-8571-dfbf22d53215-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.223735 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-f88wr"] Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.602523 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" 
event={"ID":"e7617c67-0e97-4496-abe7-8b5ab1db282d","Type":"ContainerStarted","Data":"e52023adc3da525f24eb57aa041bf8a16879ff5dd8bf9ee69b2e9df204b17901"} Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.604659 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" event={"ID":"ca72ab94-eca6-4c68-8571-dfbf22d53215","Type":"ContainerDied","Data":"b34ebd93ad69e7e4a3f56ef38c546b99471a122fbd0ca0b833078e37e9021ce7"} Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.604724 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.604788 4787 scope.go:117] "RemoveContainer" containerID="acb394ab72dbe204593957eaceb6ea30b1fdb11914501a13de1dcc9206422ee5" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.646464 4787 scope.go:117] "RemoveContainer" containerID="1c9315015aba9bec3414ce09d2c8bb1e20c52429768cef81706a151c8c31ed4e" Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.649361 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf"] Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.667073 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-7ndsf"] Feb 19 19:46:20 crc kubenswrapper[4787]: I0219 19:46:20.907781 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" path="/var/lib/kubelet/pods/ca72ab94-eca6-4c68-8571-dfbf22d53215/volumes" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.101673 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xq7tp" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.265462 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvsfn\" (UniqueName: \"kubernetes.io/projected/fb810906-81bd-42b7-9a2b-0900059baba9-kube-api-access-bvsfn\") pod \"fb810906-81bd-42b7-9a2b-0900059baba9\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.265630 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-config-data\") pod \"fb810906-81bd-42b7-9a2b-0900059baba9\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.265695 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-combined-ca-bundle\") pod \"fb810906-81bd-42b7-9a2b-0900059baba9\" (UID: \"fb810906-81bd-42b7-9a2b-0900059baba9\") " Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.271901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb810906-81bd-42b7-9a2b-0900059baba9-kube-api-access-bvsfn" (OuterVolumeSpecName: "kube-api-access-bvsfn") pod "fb810906-81bd-42b7-9a2b-0900059baba9" (UID: "fb810906-81bd-42b7-9a2b-0900059baba9"). InnerVolumeSpecName "kube-api-access-bvsfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.302817 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb810906-81bd-42b7-9a2b-0900059baba9" (UID: "fb810906-81bd-42b7-9a2b-0900059baba9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.359711 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-config-data" (OuterVolumeSpecName: "config-data") pod "fb810906-81bd-42b7-9a2b-0900059baba9" (UID: "fb810906-81bd-42b7-9a2b-0900059baba9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.368685 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.368717 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvsfn\" (UniqueName: \"kubernetes.io/projected/fb810906-81bd-42b7-9a2b-0900059baba9-kube-api-access-bvsfn\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.368729 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb810906-81bd-42b7-9a2b-0900059baba9-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.625063 4787 generic.go:334] "Generic (PLEG): container finished" podID="e7617c67-0e97-4496-abe7-8b5ab1db282d" containerID="c519a4395d5572d627b84bb9dcd828bd88792c48adedf9850cb57924af4744e2" exitCode=0 Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.625183 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" event={"ID":"e7617c67-0e97-4496-abe7-8b5ab1db282d","Type":"ContainerDied","Data":"c519a4395d5572d627b84bb9dcd828bd88792c48adedf9850cb57924af4744e2"} Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.654445 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-db-sync-xq7tp" event={"ID":"fb810906-81bd-42b7-9a2b-0900059baba9","Type":"ContainerDied","Data":"226fc085ba3fedf7ca263fd728bf5f97dd23bc49397d71df56920c128d445f5d"} Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.654484 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="226fc085ba3fedf7ca263fd728bf5f97dd23bc49397d71df56920c128d445f5d" Feb 19 19:46:21 crc kubenswrapper[4787]: I0219 19:46:21.654528 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xq7tp" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.688528 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6bd6f6c8df-52ltw"] Feb 19 19:46:22 crc kubenswrapper[4787]: E0219 19:46:22.695063 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerName="init" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.695086 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerName="init" Feb 19 19:46:22 crc kubenswrapper[4787]: E0219 19:46:22.695095 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerName="dnsmasq-dns" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.695101 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerName="dnsmasq-dns" Feb 19 19:46:22 crc kubenswrapper[4787]: E0219 19:46:22.695158 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb810906-81bd-42b7-9a2b-0900059baba9" containerName="heat-db-sync" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.695167 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb810906-81bd-42b7-9a2b-0900059baba9" containerName="heat-db-sync" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.695452 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fb810906-81bd-42b7-9a2b-0900059baba9" containerName="heat-db-sync" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.695485 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca72ab94-eca6-4c68-8571-dfbf22d53215" containerName="dnsmasq-dns" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.704360 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.709992 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6bd6f6c8df-52ltw"] Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.741351 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" event={"ID":"e7617c67-0e97-4496-abe7-8b5ab1db282d","Type":"ContainerStarted","Data":"96f7f7216b245febcdc7cc9034fd11125810b148b550e45b94ac8e6408c5c7a8"} Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.742841 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.788728 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-68df46bdff-kbz99"] Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.796833 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.807035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-combined-ca-bundle\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.807210 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-config-data\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.807254 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-config-data-custom\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.807298 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6bq8\" (UniqueName: \"kubernetes.io/projected/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-kube-api-access-d6bq8\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.832213 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5b6df797bd-hbhzc"] Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.833948 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.874226 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68df46bdff-kbz99"] Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.880178 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" podStartSLOduration=3.88016097 podStartE2EDuration="3.88016097s" podCreationTimestamp="2026-02-19 19:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:22.771367527 +0000 UTC m=+1650.562033489" watchObservedRunningTime="2026-02-19 19:46:22.88016097 +0000 UTC m=+1650.670826912" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.908093 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b6df797bd-hbhzc"] Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.910020 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-public-tls-certs\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.910241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-combined-ca-bundle\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.910507 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-combined-ca-bundle\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.910781 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-config-data\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.912643 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-config-data\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.912679 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-config-data-custom\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.912751 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-config-data-custom\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.912815 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6bq8\" (UniqueName: 
\"kubernetes.io/projected/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-kube-api-access-d6bq8\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.912843 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tchm6\" (UniqueName: \"kubernetes.io/projected/8467aa04-3865-45da-8f9f-98011798d7d6-kube-api-access-tchm6\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.912896 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-internal-tls-certs\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.920356 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-config-data-custom\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.921597 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-combined-ca-bundle\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.922571 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-config-data\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:22 crc kubenswrapper[4787]: I0219 19:46:22.928290 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6bq8\" (UniqueName: \"kubernetes.io/projected/ee9c5ff1-8d4f-4bd2-b00c-4679c695e557-kube-api-access-d6bq8\") pod \"heat-engine-6bd6f6c8df-52ltw\" (UID: \"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557\") " pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015400 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-combined-ca-bundle\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015515 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-config-data-custom\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015599 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-config-data\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015699 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-internal-tls-certs\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015748 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-config-data-custom\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015807 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-public-tls-certs\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015869 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tchm6\" (UniqueName: \"kubernetes.io/projected/8467aa04-3865-45da-8f9f-98011798d7d6-kube-api-access-tchm6\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015899 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-internal-tls-certs\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-combined-ca-bundle\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.015962 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz5lh\" (UniqueName: \"kubernetes.io/projected/70991400-5d96-44fe-934f-c866defe8adb-kube-api-access-rz5lh\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.016162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-public-tls-certs\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.016292 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-config-data\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.021124 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-config-data-custom\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.021389 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-internal-tls-certs\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.024447 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-combined-ca-bundle\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.025665 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-config-data\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.027831 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8467aa04-3865-45da-8f9f-98011798d7d6-public-tls-certs\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.036346 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tchm6\" (UniqueName: \"kubernetes.io/projected/8467aa04-3865-45da-8f9f-98011798d7d6-kube-api-access-tchm6\") pod \"heat-cfnapi-68df46bdff-kbz99\" (UID: \"8467aa04-3865-45da-8f9f-98011798d7d6\") " pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.106712 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.118759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-internal-tls-certs\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.119644 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-public-tls-certs\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.119737 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-combined-ca-bundle\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.119770 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz5lh\" (UniqueName: \"kubernetes.io/projected/70991400-5d96-44fe-934f-c866defe8adb-kube-api-access-rz5lh\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.119963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-config-data\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " 
pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.120119 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-config-data-custom\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.119472 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.124003 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-public-tls-certs\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.124557 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-config-data-custom\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.126994 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-combined-ca-bundle\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.129782 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-config-data\") pod 
\"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.130050 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70991400-5d96-44fe-934f-c866defe8adb-internal-tls-certs\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.138000 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz5lh\" (UniqueName: \"kubernetes.io/projected/70991400-5d96-44fe-934f-c866defe8adb-kube-api-access-rz5lh\") pod \"heat-api-5b6df797bd-hbhzc\" (UID: \"70991400-5d96-44fe-934f-c866defe8adb\") " pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.160264 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:23 crc kubenswrapper[4787]: W0219 19:46:23.660192 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8467aa04_3865_45da_8f9f_98011798d7d6.slice/crio-d2fdae236892ff0d8a5954edd607e73af40595c9c38e320716313a93d15cc920 WatchSource:0}: Error finding container d2fdae236892ff0d8a5954edd607e73af40595c9c38e320716313a93d15cc920: Status 404 returned error can't find the container with id d2fdae236892ff0d8a5954edd607e73af40595c9c38e320716313a93d15cc920 Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.661749 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68df46bdff-kbz99"] Feb 19 19:46:23 crc kubenswrapper[4787]: W0219 19:46:23.767122 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70991400_5d96_44fe_934f_c866defe8adb.slice/crio-8945b2f5534fa3ed384d830910e7568ae6f70ca02ef6a45aff8c99cf6c7fa551 WatchSource:0}: Error finding container 8945b2f5534fa3ed384d830910e7568ae6f70ca02ef6a45aff8c99cf6c7fa551: Status 404 returned error can't find the container with id 8945b2f5534fa3ed384d830910e7568ae6f70ca02ef6a45aff8c99cf6c7fa551 Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.788236 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68df46bdff-kbz99" event={"ID":"8467aa04-3865-45da-8f9f-98011798d7d6","Type":"ContainerStarted","Data":"d2fdae236892ff0d8a5954edd607e73af40595c9c38e320716313a93d15cc920"} Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.791403 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6bd6f6c8df-52ltw"] Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.791980 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6bd6f6c8df-52ltw" 
event={"ID":"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557","Type":"ContainerStarted","Data":"bbcebf32b90a2c533ad8abc077d14ca4916039f76101fcb03b2d54ebb57c6578"} Feb 19 19:46:23 crc kubenswrapper[4787]: I0219 19:46:23.809544 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b6df797bd-hbhzc"] Feb 19 19:46:24 crc kubenswrapper[4787]: I0219 19:46:24.803930 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6bd6f6c8df-52ltw" event={"ID":"ee9c5ff1-8d4f-4bd2-b00c-4679c695e557","Type":"ContainerStarted","Data":"4472ec6b0c8f537294629e3b8a2f93f2ff2f8354e720f69fc01e214eec480ca7"} Feb 19 19:46:24 crc kubenswrapper[4787]: I0219 19:46:24.804188 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:24 crc kubenswrapper[4787]: I0219 19:46:24.805861 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b6df797bd-hbhzc" event={"ID":"70991400-5d96-44fe-934f-c866defe8adb","Type":"ContainerStarted","Data":"8945b2f5534fa3ed384d830910e7568ae6f70ca02ef6a45aff8c99cf6c7fa551"} Feb 19 19:46:24 crc kubenswrapper[4787]: I0219 19:46:24.828818 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6bd6f6c8df-52ltw" podStartSLOduration=2.828796603 podStartE2EDuration="2.828796603s" podCreationTimestamp="2026-02-19 19:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:24.819244793 +0000 UTC m=+1652.609910735" watchObservedRunningTime="2026-02-19 19:46:24.828796603 +0000 UTC m=+1652.619462545" Feb 19 19:46:26 crc kubenswrapper[4787]: I0219 19:46:26.846882 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b6df797bd-hbhzc" event={"ID":"70991400-5d96-44fe-934f-c866defe8adb","Type":"ContainerStarted","Data":"6fd62ee1f16e111349890e156e42b649d9c135ba8b29cccae9166f2925288f32"} 
Feb 19 19:46:26 crc kubenswrapper[4787]: I0219 19:46:26.847482 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:26 crc kubenswrapper[4787]: I0219 19:46:26.849924 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68df46bdff-kbz99" event={"ID":"8467aa04-3865-45da-8f9f-98011798d7d6","Type":"ContainerStarted","Data":"8c76e4f0a9df60ec2e2d5e097225b34de03085f36451c34f390fa413eb1f48c9"} Feb 19 19:46:26 crc kubenswrapper[4787]: I0219 19:46:26.850158 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:26 crc kubenswrapper[4787]: I0219 19:46:26.888226 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5b6df797bd-hbhzc" podStartSLOduration=3.1867878259999998 podStartE2EDuration="4.888201105s" podCreationTimestamp="2026-02-19 19:46:22 +0000 UTC" firstStartedPulling="2026-02-19 19:46:23.773631158 +0000 UTC m=+1651.564297100" lastFinishedPulling="2026-02-19 19:46:25.475044437 +0000 UTC m=+1653.265710379" observedRunningTime="2026-02-19 19:46:26.868009655 +0000 UTC m=+1654.658675687" watchObservedRunningTime="2026-02-19 19:46:26.888201105 +0000 UTC m=+1654.678867057" Feb 19 19:46:26 crc kubenswrapper[4787]: I0219 19:46:26.907323 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-68df46bdff-kbz99" podStartSLOduration=3.093608714 podStartE2EDuration="4.907290614s" podCreationTimestamp="2026-02-19 19:46:22 +0000 UTC" firstStartedPulling="2026-02-19 19:46:23.663877078 +0000 UTC m=+1651.454543020" lastFinishedPulling="2026-02-19 19:46:25.477558978 +0000 UTC m=+1653.268224920" observedRunningTime="2026-02-19 19:46:26.893772442 +0000 UTC m=+1654.684438384" watchObservedRunningTime="2026-02-19 19:46:26.907290614 +0000 UTC m=+1654.697956606" Feb 19 19:46:29 crc kubenswrapper[4787]: I0219 19:46:29.584914 4787 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-f88wr" Feb 19 19:46:29 crc kubenswrapper[4787]: I0219 19:46:29.682450 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-k45lq"] Feb 19 19:46:29 crc kubenswrapper[4787]: I0219 19:46:29.682805 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerName="dnsmasq-dns" containerID="cri-o://ed3bc4dd9b60506ede5da1aaef0ff331f6f51a6a0118fd248b20f8aa2cf93456" gracePeriod=10 Feb 19 19:46:29 crc kubenswrapper[4787]: I0219 19:46:29.888840 4787 generic.go:334] "Generic (PLEG): container finished" podID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerID="ed3bc4dd9b60506ede5da1aaef0ff331f6f51a6a0118fd248b20f8aa2cf93456" exitCode=0 Feb 19 19:46:29 crc kubenswrapper[4787]: I0219 19:46:29.889212 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" event={"ID":"ea956f9c-808d-4a82-88e9-83cc34c223c2","Type":"ContainerDied","Data":"ed3bc4dd9b60506ede5da1aaef0ff331f6f51a6a0118fd248b20f8aa2cf93456"} Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.342959 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507214 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-swift-storage-0\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507566 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-openstack-edpm-ipam\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507590 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-sb\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507626 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w59zq\" (UniqueName: \"kubernetes.io/projected/ea956f9c-808d-4a82-88e9-83cc34c223c2-kube-api-access-w59zq\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507645 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-config\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507676 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-nb\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.507870 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-svc\") pod \"ea956f9c-808d-4a82-88e9-83cc34c223c2\" (UID: \"ea956f9c-808d-4a82-88e9-83cc34c223c2\") " Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.531815 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea956f9c-808d-4a82-88e9-83cc34c223c2-kube-api-access-w59zq" (OuterVolumeSpecName: "kube-api-access-w59zq") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "kube-api-access-w59zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.596232 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.597109 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.610263 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.611250 4787 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.611349 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.611413 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.611515 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w59zq\" (UniqueName: \"kubernetes.io/projected/ea956f9c-808d-4a82-88e9-83cc34c223c2-kube-api-access-w59zq\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.613649 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-config" (OuterVolumeSpecName: "config") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.616951 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.631952 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ea956f9c-808d-4a82-88e9-83cc34c223c2" (UID: "ea956f9c-808d-4a82-88e9-83cc34c223c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.714658 4787 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-config\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.714697 4787 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.714708 4787 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ea956f9c-808d-4a82-88e9-83cc34c223c2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.903152 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.904768 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" event={"ID":"ea956f9c-808d-4a82-88e9-83cc34c223c2","Type":"ContainerDied","Data":"d1062c1bbdc7d4bf1113781d908b54d7e89f2a7ac098769202e2f7fc2464e7d6"} Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.904814 4787 scope.go:117] "RemoveContainer" containerID="ed3bc4dd9b60506ede5da1aaef0ff331f6f51a6a0118fd248b20f8aa2cf93456" Feb 19 19:46:30 crc kubenswrapper[4787]: I0219 19:46:30.933576 4787 scope.go:117] "RemoveContainer" containerID="9d0288e998fc385d906df470c9e01c74cea8f30e0f5326641379be898fb65021" Feb 19 19:46:32 crc kubenswrapper[4787]: I0219 19:46:32.902400 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:46:32 crc kubenswrapper[4787]: E0219 19:46:32.903187 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:46:33 crc kubenswrapper[4787]: I0219 19:46:33.168802 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6bd6f6c8df-52ltw" Feb 19 19:46:33 crc kubenswrapper[4787]: I0219 19:46:33.247948 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b644788db-6g25v"] Feb 19 19:46:33 crc kubenswrapper[4787]: I0219 19:46:33.248239 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5b644788db-6g25v" podUID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" 
containerName="heat-engine" containerID="cri-o://3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85" gracePeriod=60 Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.372024 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2"] Feb 19 19:46:34 crc kubenswrapper[4787]: E0219 19:46:34.386065 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerName="init" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.386091 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerName="init" Feb 19 19:46:34 crc kubenswrapper[4787]: E0219 19:46:34.386114 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerName="dnsmasq-dns" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.386123 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerName="dnsmasq-dns" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.386436 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" containerName="dnsmasq-dns" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.387448 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2"] Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.387524 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.391219 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.391811 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.391940 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.392069 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.413297 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.413407 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jjl\" (UniqueName: \"kubernetes.io/projected/17844a3d-7feb-457b-8e01-f38398e34b63-kube-api-access-c4jjl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.413529 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.413630 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.515892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.515945 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jjl\" (UniqueName: \"kubernetes.io/projected/17844a3d-7feb-457b-8e01-f38398e34b63-kube-api-access-c4jjl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.516005 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: 
\"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.516047 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.523435 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.523550 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.524385 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.546369 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4jjl\" (UniqueName: \"kubernetes.io/projected/17844a3d-7feb-457b-8e01-f38398e34b63-kube-api-access-c4jjl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.725901 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.823192 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5b6df797bd-hbhzc" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.844943 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-68df46bdff-kbz99" Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.929873 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c8d9fd94c-f2vvf"] Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.930083 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6c8d9fd94c-f2vvf" podUID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" containerName="heat-api" containerID="cri-o://03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2" gracePeriod=60 Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.964971 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5b5b7b88f4-48559"] Feb 19 19:46:34 crc kubenswrapper[4787]: I0219 19:46:34.965186 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" podUID="c965959c-8adc-4c6e-a275-d05e8a3b7223" containerName="heat-cfnapi" containerID="cri-o://b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9" gracePeriod=60 Feb 19 19:46:35 crc kubenswrapper[4787]: I0219 
19:46:35.672093 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2"] Feb 19 19:46:35 crc kubenswrapper[4787]: I0219 19:46:35.984249 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" event={"ID":"17844a3d-7feb-457b-8e01-f38398e34b63","Type":"ContainerStarted","Data":"6c7d22643321760daa37dd2d5dfb6edb6a9236e85a5bb095f7ce77a576b5026f"} Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.143690 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" podUID="c965959c-8adc-4c6e-a275-d05e8a3b7223" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.227:8000/healthcheck\": read tcp 10.217.0.2:58848->10.217.0.227:8000: read: connection reset by peer" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.496882 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6c8d9fd94c-f2vvf" podUID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.226:8004/healthcheck\": read tcp 10.217.0.2:36032->10.217.0.226:8004: read: connection reset by peer" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.697236 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.729816 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t95n8\" (UniqueName: \"kubernetes.io/projected/c965959c-8adc-4c6e-a275-d05e8a3b7223-kube-api-access-t95n8\") pod \"c965959c-8adc-4c6e-a275-d05e8a3b7223\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.729874 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-combined-ca-bundle\") pod \"c965959c-8adc-4c6e-a275-d05e8a3b7223\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.729923 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data-custom\") pod \"c965959c-8adc-4c6e-a275-d05e8a3b7223\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.730115 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-public-tls-certs\") pod \"c965959c-8adc-4c6e-a275-d05e8a3b7223\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.730928 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-internal-tls-certs\") pod \"c965959c-8adc-4c6e-a275-d05e8a3b7223\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.730954 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data\") pod \"c965959c-8adc-4c6e-a275-d05e8a3b7223\" (UID: \"c965959c-8adc-4c6e-a275-d05e8a3b7223\") " Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.760781 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c965959c-8adc-4c6e-a275-d05e8a3b7223" (UID: "c965959c-8adc-4c6e-a275-d05e8a3b7223"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:38 crc kubenswrapper[4787]: E0219 19:46:38.760789 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 19:46:38 crc kubenswrapper[4787]: E0219 19:46:38.765574 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.769713 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c965959c-8adc-4c6e-a275-d05e8a3b7223" (UID: "c965959c-8adc-4c6e-a275-d05e8a3b7223"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:38 crc kubenswrapper[4787]: E0219 19:46:38.770674 4787 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 19 19:46:38 crc kubenswrapper[4787]: E0219 19:46:38.770728 4787 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b644788db-6g25v" podUID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" containerName="heat-engine" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.779946 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c965959c-8adc-4c6e-a275-d05e8a3b7223-kube-api-access-t95n8" (OuterVolumeSpecName: "kube-api-access-t95n8") pod "c965959c-8adc-4c6e-a275-d05e8a3b7223" (UID: "c965959c-8adc-4c6e-a275-d05e8a3b7223"). InnerVolumeSpecName "kube-api-access-t95n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.825393 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c965959c-8adc-4c6e-a275-d05e8a3b7223" (UID: "c965959c-8adc-4c6e-a275-d05e8a3b7223"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.825530 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data" (OuterVolumeSpecName: "config-data") pod "c965959c-8adc-4c6e-a275-d05e8a3b7223" (UID: "c965959c-8adc-4c6e-a275-d05e8a3b7223"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.834466 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t95n8\" (UniqueName: \"kubernetes.io/projected/c965959c-8adc-4c6e-a275-d05e8a3b7223-kube-api-access-t95n8\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.834504 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.834516 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.834527 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.834539 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.915835 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c965959c-8adc-4c6e-a275-d05e8a3b7223" (UID: "c965959c-8adc-4c6e-a275-d05e8a3b7223"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:38 crc kubenswrapper[4787]: I0219 19:46:38.939815 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c965959c-8adc-4c6e-a275-d05e8a3b7223-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.004355 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.042906 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-internal-tls-certs\") pod \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.043304 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data-custom\") pod \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.043490 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng2x6\" (UniqueName: \"kubernetes.io/projected/a10056f0-3bd6-4c6b-891b-b671799f5d9d-kube-api-access-ng2x6\") pod \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.043659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-combined-ca-bundle\") pod \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.043807 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-public-tls-certs\") pod \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.044007 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data\") pod \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\" (UID: \"a10056f0-3bd6-4c6b-891b-b671799f5d9d\") " Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.054140 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a10056f0-3bd6-4c6b-891b-b671799f5d9d" (UID: "a10056f0-3bd6-4c6b-891b-b671799f5d9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.054578 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10056f0-3bd6-4c6b-891b-b671799f5d9d-kube-api-access-ng2x6" (OuterVolumeSpecName: "kube-api-access-ng2x6") pod "a10056f0-3bd6-4c6b-891b-b671799f5d9d" (UID: "a10056f0-3bd6-4c6b-891b-b671799f5d9d"). InnerVolumeSpecName "kube-api-access-ng2x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.055785 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c8d9fd94c-f2vvf" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.055809 4787 generic.go:334] "Generic (PLEG): container finished" podID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" containerID="03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2" exitCode=0 Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.055849 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8d9fd94c-f2vvf" event={"ID":"a10056f0-3bd6-4c6b-891b-b671799f5d9d","Type":"ContainerDied","Data":"03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2"} Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.057350 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8d9fd94c-f2vvf" event={"ID":"a10056f0-3bd6-4c6b-891b-b671799f5d9d","Type":"ContainerDied","Data":"02a6a5370f35bd568a8a555f4a60c8747c61d823c83378c3d13a0ac82808a49f"} Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.057453 4787 scope.go:117] "RemoveContainer" containerID="03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.068004 4787 generic.go:334] "Generic (PLEG): container finished" podID="c965959c-8adc-4c6e-a275-d05e8a3b7223" containerID="b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9" exitCode=0 Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.069319 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" event={"ID":"c965959c-8adc-4c6e-a275-d05e8a3b7223","Type":"ContainerDied","Data":"b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9"} Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.069903 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.070102 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5b5b7b88f4-48559" event={"ID":"c965959c-8adc-4c6e-a275-d05e8a3b7223","Type":"ContainerDied","Data":"91cb0aed6b3ab87da9532462413bbc4407e73cf3d7e5950500b77afd2bba0b56"} Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.099563 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a10056f0-3bd6-4c6b-891b-b671799f5d9d" (UID: "a10056f0-3bd6-4c6b-891b-b671799f5d9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.115709 4787 scope.go:117] "RemoveContainer" containerID="03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2" Feb 19 19:46:39 crc kubenswrapper[4787]: E0219 19:46:39.116328 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2\": container with ID starting with 03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2 not found: ID does not exist" containerID="03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.116686 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2"} err="failed to get container status \"03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2\": rpc error: code = NotFound desc = could not find container \"03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2\": container with ID starting with 
03bb7173f8af0f5fa77e01671e661130b243ef5f705c87ee03e116d8736dbdf2 not found: ID does not exist" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.116721 4787 scope.go:117] "RemoveContainer" containerID="b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.121994 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5b5b7b88f4-48559"] Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.140037 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a10056f0-3bd6-4c6b-891b-b671799f5d9d" (UID: "a10056f0-3bd6-4c6b-891b-b671799f5d9d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.147137 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.147172 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng2x6\" (UniqueName: \"kubernetes.io/projected/a10056f0-3bd6-4c6b-891b-b671799f5d9d-kube-api-access-ng2x6\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.147187 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.147201 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 
19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.152846 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data" (OuterVolumeSpecName: "config-data") pod "a10056f0-3bd6-4c6b-891b-b671799f5d9d" (UID: "a10056f0-3bd6-4c6b-891b-b671799f5d9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.165241 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5b5b7b88f4-48559"] Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.187747 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a10056f0-3bd6-4c6b-891b-b671799f5d9d" (UID: "a10056f0-3bd6-4c6b-891b-b671799f5d9d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.191070 4787 scope.go:117] "RemoveContainer" containerID="b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9" Feb 19 19:46:39 crc kubenswrapper[4787]: E0219 19:46:39.191642 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9\": container with ID starting with b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9 not found: ID does not exist" containerID="b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.191677 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9"} err="failed to get container status 
\"b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9\": rpc error: code = NotFound desc = could not find container \"b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9\": container with ID starting with b910d9aea4ef32a3da2d0524f40d4030845bc8eeb1f09dd1e177e8fd645b7ad9 not found: ID does not exist" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.252157 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.252185 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a10056f0-3bd6-4c6b-891b-b671799f5d9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.483147 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c8d9fd94c-f2vvf"] Feb 19 19:46:39 crc kubenswrapper[4787]: I0219 19:46:39.495892 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c8d9fd94c-f2vvf"] Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.086228 4787 generic.go:334] "Generic (PLEG): container finished" podID="8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef" containerID="384f4601b8450b5236d0e9cd0e103d588730ac9d2cfd6e27983e5e6e1289d6fd" exitCode=0 Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.086300 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef","Type":"ContainerDied","Data":"384f4601b8450b5236d0e9cd0e103d588730ac9d2cfd6e27983e5e6e1289d6fd"} Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.091208 4787 generic.go:334] "Generic (PLEG): container finished" podID="eebe8011-08bc-437a-89d5-f7aecaedceb5" containerID="f9fc7be8a9847855f609c07eb32028a39ed6d819833f45bb74656da8ab4013c3" exitCode=0 Feb 19 
19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.091244 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eebe8011-08bc-437a-89d5-f7aecaedceb5","Type":"ContainerDied","Data":"f9fc7be8a9847855f609c07eb32028a39ed6d819833f45bb74656da8ab4013c3"} Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.173237 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-h476n"] Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.196752 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-h476n"] Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.269682 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-bpfk6"] Feb 19 19:46:40 crc kubenswrapper[4787]: E0219 19:46:40.270361 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c965959c-8adc-4c6e-a275-d05e8a3b7223" containerName="heat-cfnapi" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.270385 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c965959c-8adc-4c6e-a275-d05e8a3b7223" containerName="heat-cfnapi" Feb 19 19:46:40 crc kubenswrapper[4787]: E0219 19:46:40.270399 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" containerName="heat-api" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.270407 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" containerName="heat-api" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.270734 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" containerName="heat-api" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.270771 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c965959c-8adc-4c6e-a275-d05e8a3b7223" containerName="heat-cfnapi" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.271885 4787 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.273540 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.306784 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bpfk6"] Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.388401 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-combined-ca-bundle\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.388810 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-config-data\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.388960 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-scripts\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.389184 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xqf\" (UniqueName: \"kubernetes.io/projected/b28778f5-042a-4379-b738-86ef9f31a6df-kube-api-access-w8xqf\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc 
kubenswrapper[4787]: I0219 19:46:40.490939 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8xqf\" (UniqueName: \"kubernetes.io/projected/b28778f5-042a-4379-b738-86ef9f31a6df-kube-api-access-w8xqf\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.491018 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-combined-ca-bundle\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.491111 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-config-data\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.491157 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-scripts\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.509095 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-config-data\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.509462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-scripts\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.515930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8xqf\" (UniqueName: \"kubernetes.io/projected/b28778f5-042a-4379-b738-86ef9f31a6df-kube-api-access-w8xqf\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.516859 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-combined-ca-bundle\") pod \"aodh-db-sync-bpfk6\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.612021 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.908957 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10056f0-3bd6-4c6b-891b-b671799f5d9d" path="/var/lib/kubelet/pods/a10056f0-3bd6-4c6b-891b-b671799f5d9d/volumes" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.909711 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29b6fec-6028-4cc7-b7b8-0bf461e57fe4" path="/var/lib/kubelet/pods/c29b6fec-6028-4cc7-b7b8-0bf461e57fe4/volumes" Feb 19 19:46:40 crc kubenswrapper[4787]: I0219 19:46:40.910547 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c965959c-8adc-4c6e-a275-d05e8a3b7223" path="/var/lib/kubelet/pods/c965959c-8adc-4c6e-a275-d05e8a3b7223/volumes" Feb 19 19:46:45 crc kubenswrapper[4787]: I0219 19:46:45.156580 4787 generic.go:334] "Generic (PLEG): container finished" podID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" containerID="3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85" exitCode=0 Feb 19 19:46:45 crc kubenswrapper[4787]: I0219 19:46:45.157049 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b644788db-6g25v" event={"ID":"10c7b3fe-a75d-45bd-ba12-9a801a77798e","Type":"ContainerDied","Data":"3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85"} Feb 19 19:46:45 crc kubenswrapper[4787]: I0219 19:46:45.892171 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:46:45 crc kubenswrapper[4787]: E0219 19:46:45.892847 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.345912 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.455750 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data\") pod \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.455954 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhfc6\" (UniqueName: \"kubernetes.io/projected/10c7b3fe-a75d-45bd-ba12-9a801a77798e-kube-api-access-qhfc6\") pod \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.456049 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data-custom\") pod \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.456106 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-combined-ca-bundle\") pod \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\" (UID: \"10c7b3fe-a75d-45bd-ba12-9a801a77798e\") " Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.461259 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") 
pod "10c7b3fe-a75d-45bd-ba12-9a801a77798e" (UID: "10c7b3fe-a75d-45bd-ba12-9a801a77798e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.463116 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c7b3fe-a75d-45bd-ba12-9a801a77798e-kube-api-access-qhfc6" (OuterVolumeSpecName: "kube-api-access-qhfc6") pod "10c7b3fe-a75d-45bd-ba12-9a801a77798e" (UID: "10c7b3fe-a75d-45bd-ba12-9a801a77798e"). InnerVolumeSpecName "kube-api-access-qhfc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.519756 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bpfk6"] Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.520827 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10c7b3fe-a75d-45bd-ba12-9a801a77798e" (UID: "10c7b3fe-a75d-45bd-ba12-9a801a77798e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.547806 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data" (OuterVolumeSpecName: "config-data") pod "10c7b3fe-a75d-45bd-ba12-9a801a77798e" (UID: "10c7b3fe-a75d-45bd-ba12-9a801a77798e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.559557 4787 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.559588 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.559629 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c7b3fe-a75d-45bd-ba12-9a801a77798e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:46 crc kubenswrapper[4787]: I0219 19:46:46.559642 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhfc6\" (UniqueName: \"kubernetes.io/projected/10c7b3fe-a75d-45bd-ba12-9a801a77798e-kube-api-access-qhfc6\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.184977 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b644788db-6g25v" event={"ID":"10c7b3fe-a75d-45bd-ba12-9a801a77798e","Type":"ContainerDied","Data":"021abd964a69c2aa3eaf95f56a2bfab02e148821fe93ed4bde1ae8924f2e5556"} Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.185317 4787 scope.go:117] "RemoveContainer" containerID="3df0c32309122e2a1d29be941e8245aa2a7f33ad45eef69dc0b9473c2bbe3b85" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.185448 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b644788db-6g25v" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.193694 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eebe8011-08bc-437a-89d5-f7aecaedceb5","Type":"ContainerStarted","Data":"cdd5fa4282a892b7804fb8fe9a10c6377cf30c52511527c0e2d24eba57f0d2b9"} Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.194684 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.196385 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" event={"ID":"17844a3d-7feb-457b-8e01-f38398e34b63","Type":"ContainerStarted","Data":"3a2cc412a6ebcf3255660e9acb2b6a85eccf26d088d1fe0bdf02724cbe523cdc"} Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.199047 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bpfk6" event={"ID":"b28778f5-042a-4379-b738-86ef9f31a6df","Type":"ContainerStarted","Data":"bc7387b6a2e35ba602befc07e910d2014b06bd9a01b304dc106b521614117311"} Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.205752 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef","Type":"ContainerStarted","Data":"cc944272f3bf55a43805641f79f279763a121f68121e0406b679b5aabbbeeef1"} Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.206200 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.217712 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b644788db-6g25v"] Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.232470 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5b644788db-6g25v"] Feb 
19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.245630 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" podStartSLOduration=2.874504965 podStartE2EDuration="13.245593966s" podCreationTimestamp="2026-02-19 19:46:34 +0000 UTC" firstStartedPulling="2026-02-19 19:46:35.680222761 +0000 UTC m=+1663.470888703" lastFinishedPulling="2026-02-19 19:46:46.051311762 +0000 UTC m=+1673.841977704" observedRunningTime="2026-02-19 19:46:47.220371273 +0000 UTC m=+1675.011037215" watchObservedRunningTime="2026-02-19 19:46:47.245593966 +0000 UTC m=+1675.036259908" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.252331 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.252308646 podStartE2EDuration="48.252308646s" podCreationTimestamp="2026-02-19 19:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:47.241910852 +0000 UTC m=+1675.032576794" watchObservedRunningTime="2026-02-19 19:46:47.252308646 +0000 UTC m=+1675.042974588" Feb 19 19:46:47 crc kubenswrapper[4787]: I0219 19:46:47.268025 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.268000599 podStartE2EDuration="48.268000599s" podCreationTimestamp="2026-02-19 19:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:46:47.259271522 +0000 UTC m=+1675.049937474" watchObservedRunningTime="2026-02-19 19:46:47.268000599 +0000 UTC m=+1675.058666551" Feb 19 19:46:48 crc kubenswrapper[4787]: I0219 19:46:48.902863 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" 
path="/var/lib/kubelet/pods/10c7b3fe-a75d-45bd-ba12-9a801a77798e/volumes" Feb 19 19:46:50 crc kubenswrapper[4787]: I0219 19:46:50.324912 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 19:46:53 crc kubenswrapper[4787]: I0219 19:46:53.441122 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 19:46:54 crc kubenswrapper[4787]: I0219 19:46:54.310929 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bpfk6" event={"ID":"b28778f5-042a-4379-b738-86ef9f31a6df","Type":"ContainerStarted","Data":"9920d194475cf3325ff71e36f0b05d0e5d583bcdc33a14daeffdeca311cc5616"} Feb 19 19:46:54 crc kubenswrapper[4787]: I0219 19:46:54.336674 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-bpfk6" podStartSLOduration=7.428323137 podStartE2EDuration="14.336648996s" podCreationTimestamp="2026-02-19 19:46:40 +0000 UTC" firstStartedPulling="2026-02-19 19:46:46.529863499 +0000 UTC m=+1674.320529441" lastFinishedPulling="2026-02-19 19:46:53.438189358 +0000 UTC m=+1681.228855300" observedRunningTime="2026-02-19 19:46:54.327737544 +0000 UTC m=+1682.118403476" watchObservedRunningTime="2026-02-19 19:46:54.336648996 +0000 UTC m=+1682.127314938" Feb 19 19:46:57 crc kubenswrapper[4787]: I0219 19:46:57.349833 4787 generic.go:334] "Generic (PLEG): container finished" podID="b28778f5-042a-4379-b738-86ef9f31a6df" containerID="9920d194475cf3325ff71e36f0b05d0e5d583bcdc33a14daeffdeca311cc5616" exitCode=0 Feb 19 19:46:57 crc kubenswrapper[4787]: I0219 19:46:57.349961 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bpfk6" event={"ID":"b28778f5-042a-4379-b738-86ef9f31a6df","Type":"ContainerDied","Data":"9920d194475cf3325ff71e36f0b05d0e5d583bcdc33a14daeffdeca311cc5616"} Feb 19 19:46:57 crc kubenswrapper[4787]: I0219 19:46:57.892321 4787 scope.go:117] "RemoveContainer" 
containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:46:57 crc kubenswrapper[4787]: E0219 19:46:57.892982 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:46:58 crc kubenswrapper[4787]: I0219 19:46:58.363750 4787 generic.go:334] "Generic (PLEG): container finished" podID="17844a3d-7feb-457b-8e01-f38398e34b63" containerID="3a2cc412a6ebcf3255660e9acb2b6a85eccf26d088d1fe0bdf02724cbe523cdc" exitCode=0 Feb 19 19:46:58 crc kubenswrapper[4787]: I0219 19:46:58.364509 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" event={"ID":"17844a3d-7feb-457b-8e01-f38398e34b63","Type":"ContainerDied","Data":"3a2cc412a6ebcf3255660e9acb2b6a85eccf26d088d1fe0bdf02724cbe523cdc"} Feb 19 19:46:58 crc kubenswrapper[4787]: I0219 19:46:58.874865 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.006937 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-combined-ca-bundle\") pod \"b28778f5-042a-4379-b738-86ef9f31a6df\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.007016 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-config-data\") pod \"b28778f5-042a-4379-b738-86ef9f31a6df\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.007162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-scripts\") pod \"b28778f5-042a-4379-b738-86ef9f31a6df\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.007389 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8xqf\" (UniqueName: \"kubernetes.io/projected/b28778f5-042a-4379-b738-86ef9f31a6df-kube-api-access-w8xqf\") pod \"b28778f5-042a-4379-b738-86ef9f31a6df\" (UID: \"b28778f5-042a-4379-b738-86ef9f31a6df\") " Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.012213 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28778f5-042a-4379-b738-86ef9f31a6df-kube-api-access-w8xqf" (OuterVolumeSpecName: "kube-api-access-w8xqf") pod "b28778f5-042a-4379-b738-86ef9f31a6df" (UID: "b28778f5-042a-4379-b738-86ef9f31a6df"). InnerVolumeSpecName "kube-api-access-w8xqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.012622 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-scripts" (OuterVolumeSpecName: "scripts") pod "b28778f5-042a-4379-b738-86ef9f31a6df" (UID: "b28778f5-042a-4379-b738-86ef9f31a6df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.039128 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-config-data" (OuterVolumeSpecName: "config-data") pod "b28778f5-042a-4379-b738-86ef9f31a6df" (UID: "b28778f5-042a-4379-b738-86ef9f31a6df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.048272 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b28778f5-042a-4379-b738-86ef9f31a6df" (UID: "b28778f5-042a-4379-b738-86ef9f31a6df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.110031 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8xqf\" (UniqueName: \"kubernetes.io/projected/b28778f5-042a-4379-b738-86ef9f31a6df-kube-api-access-w8xqf\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.110061 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.110070 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.110080 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b28778f5-042a-4379-b738-86ef9f31a6df-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.378866 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bpfk6" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.378962 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bpfk6" event={"ID":"b28778f5-042a-4379-b738-86ef9f31a6df","Type":"ContainerDied","Data":"bc7387b6a2e35ba602befc07e910d2014b06bd9a01b304dc106b521614117311"} Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.379004 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7387b6a2e35ba602befc07e910d2014b06bd9a01b304dc106b521614117311" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.508746 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.866344 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 19 19:46:59 crc kubenswrapper[4787]: I0219 19:46:59.940747 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.138743 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.139009 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-api" containerID="cri-o://fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85" gracePeriod=30 Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.139092 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-listener" containerID="cri-o://4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49" gracePeriod=30 Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.139165 4787 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/aodh-0" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-notifier" containerID="cri-o://0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769" gracePeriod=30 Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.139172 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-evaluator" containerID="cri-o://a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639" gracePeriod=30 Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.170705 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.238352 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam\") pod \"17844a3d-7feb-457b-8e01-f38398e34b63\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.238401 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-repo-setup-combined-ca-bundle\") pod \"17844a3d-7feb-457b-8e01-f38398e34b63\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.238477 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4jjl\" (UniqueName: \"kubernetes.io/projected/17844a3d-7feb-457b-8e01-f38398e34b63-kube-api-access-c4jjl\") pod \"17844a3d-7feb-457b-8e01-f38398e34b63\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.238688 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-inventory\") pod \"17844a3d-7feb-457b-8e01-f38398e34b63\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.244584 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17844a3d-7feb-457b-8e01-f38398e34b63-kube-api-access-c4jjl" (OuterVolumeSpecName: "kube-api-access-c4jjl") pod "17844a3d-7feb-457b-8e01-f38398e34b63" (UID: "17844a3d-7feb-457b-8e01-f38398e34b63"). InnerVolumeSpecName "kube-api-access-c4jjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.245705 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "17844a3d-7feb-457b-8e01-f38398e34b63" (UID: "17844a3d-7feb-457b-8e01-f38398e34b63"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.341541 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "17844a3d-7feb-457b-8e01-f38398e34b63" (UID: "17844a3d-7feb-457b-8e01-f38398e34b63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.342087 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam\") pod \"17844a3d-7feb-457b-8e01-f38398e34b63\" (UID: \"17844a3d-7feb-457b-8e01-f38398e34b63\") " Feb 19 19:47:00 crc kubenswrapper[4787]: W0219 19:47:00.342232 4787 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/17844a3d-7feb-457b-8e01-f38398e34b63/volumes/kubernetes.io~secret/ssh-key-openstack-edpm-ipam Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.342246 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "17844a3d-7feb-457b-8e01-f38398e34b63" (UID: "17844a3d-7feb-457b-8e01-f38398e34b63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.342766 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.342785 4787 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.342800 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4jjl\" (UniqueName: \"kubernetes.io/projected/17844a3d-7feb-457b-8e01-f38398e34b63-kube-api-access-c4jjl\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.343893 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-inventory" (OuterVolumeSpecName: "inventory") pod "17844a3d-7feb-457b-8e01-f38398e34b63" (UID: "17844a3d-7feb-457b-8e01-f38398e34b63"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.397535 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" event={"ID":"17844a3d-7feb-457b-8e01-f38398e34b63","Type":"ContainerDied","Data":"6c7d22643321760daa37dd2d5dfb6edb6a9236e85a5bb095f7ce77a576b5026f"} Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.397581 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7d22643321760daa37dd2d5dfb6edb6a9236e85a5bb095f7ce77a576b5026f" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.397669 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.444691 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17844a3d-7feb-457b-8e01-f38398e34b63-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.494924 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5"] Feb 19 19:47:00 crc kubenswrapper[4787]: E0219 19:47:00.495462 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28778f5-042a-4379-b738-86ef9f31a6df" containerName="aodh-db-sync" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.495475 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28778f5-042a-4379-b738-86ef9f31a6df" containerName="aodh-db-sync" Feb 19 19:47:00 crc kubenswrapper[4787]: E0219 19:47:00.495510 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17844a3d-7feb-457b-8e01-f38398e34b63" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.495519 4787 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="17844a3d-7feb-457b-8e01-f38398e34b63" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 19:47:00 crc kubenswrapper[4787]: E0219 19:47:00.495529 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" containerName="heat-engine" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.495535 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" containerName="heat-engine" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.495779 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28778f5-042a-4379-b738-86ef9f31a6df" containerName="aodh-db-sync" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.495801 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c7b3fe-a75d-45bd-ba12-9a801a77798e" containerName="heat-engine" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.495823 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="17844a3d-7feb-457b-8e01-f38398e34b63" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.496628 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.501496 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.501747 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.501905 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.502031 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.506042 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5"] Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.549926 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.549981 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2dn\" (UniqueName: \"kubernetes.io/projected/290f63d5-112d-49e1-ade3-47e3a699dee7-kube-api-access-bz2dn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.550145 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.652599 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.652774 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.652812 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz2dn\" (UniqueName: \"kubernetes.io/projected/290f63d5-112d-49e1-ade3-47e3a699dee7-kube-api-access-bz2dn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.657116 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.659146 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.669698 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz2dn\" (UniqueName: \"kubernetes.io/projected/290f63d5-112d-49e1-ade3-47e3a699dee7-kube-api-access-bz2dn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mw8x5\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.842283 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:00 crc kubenswrapper[4787]: I0219 19:47:00.915731 4787 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podea956f9c-808d-4a82-88e9-83cc34c223c2"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podea956f9c-808d-4a82-88e9-83cc34c223c2] : Timed out while waiting for systemd to remove kubepods-besteffort-podea956f9c_808d_4a82_88e9_83cc34c223c2.slice" Feb 19 19:47:00 crc kubenswrapper[4787]: E0219 19:47:00.915775 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podea956f9c-808d-4a82-88e9-83cc34c223c2] : unable to destroy cgroup paths for cgroup [kubepods besteffort podea956f9c-808d-4a82-88e9-83cc34c223c2] : Timed out while waiting for systemd to remove kubepods-besteffort-podea956f9c_808d_4a82_88e9_83cc34c223c2.slice" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.410597 4787 generic.go:334] "Generic (PLEG): container finished" podID="70954191-d761-4466-8f3d-2e60d61d19de" containerID="a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639" exitCode=0 Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.410983 4787 generic.go:334] "Generic (PLEG): container finished" podID="70954191-d761-4466-8f3d-2e60d61d19de" containerID="fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85" exitCode=0 Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.410688 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerDied","Data":"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639"} Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.411059 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-k45lq" Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.411082 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerDied","Data":"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85"} Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.457618 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-k45lq"] Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.464307 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-k45lq"] Feb 19 19:47:01 crc kubenswrapper[4787]: I0219 19:47:01.492367 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5"] Feb 19 19:47:02 crc kubenswrapper[4787]: I0219 19:47:02.432072 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" event={"ID":"290f63d5-112d-49e1-ade3-47e3a699dee7","Type":"ContainerStarted","Data":"2d2940a5350a8ef929246ff0b6a8dcd2556146726f46a4fdfdec493dadd0189e"} Feb 19 19:47:02 crc kubenswrapper[4787]: I0219 19:47:02.432472 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" event={"ID":"290f63d5-112d-49e1-ade3-47e3a699dee7","Type":"ContainerStarted","Data":"eaff741fc5a057473478ff28041655de1b3ee6c8ba0262957c31433e36e6eab9"} Feb 19 19:47:02 crc kubenswrapper[4787]: I0219 19:47:02.452567 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" podStartSLOduration=2.016519921 podStartE2EDuration="2.452539007s" podCreationTimestamp="2026-02-19 19:47:00 +0000 UTC" firstStartedPulling="2026-02-19 19:47:01.502556076 +0000 UTC m=+1689.293222018" lastFinishedPulling="2026-02-19 19:47:01.938575162 +0000 
UTC m=+1689.729241104" observedRunningTime="2026-02-19 19:47:02.450271632 +0000 UTC m=+1690.240937574" watchObservedRunningTime="2026-02-19 19:47:02.452539007 +0000 UTC m=+1690.243204949" Feb 19 19:47:02 crc kubenswrapper[4787]: I0219 19:47:02.957549 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea956f9c-808d-4a82-88e9-83cc34c223c2" path="/var/lib/kubelet/pods/ea956f9c-808d-4a82-88e9-83cc34c223c2/volumes" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.290962 4787 scope.go:117] "RemoveContainer" containerID="ff61d5e424a7aaa77f090d13c53e03d63779546af46704b7c03c1958840db54f" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.350067 4787 scope.go:117] "RemoveContainer" containerID="b981e8e43871a722314887a58a1a921805ea8edc9b175836152b862fcf86e7c3" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.408591 4787 scope.go:117] "RemoveContainer" containerID="409d0d347d59a437026f283198fe2e5aeafaf1b69b9ca6360c526effd05789dc" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.410678 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="rabbitmq" containerID="cri-o://b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4" gracePeriod=604796 Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.500415 4787 scope.go:117] "RemoveContainer" containerID="02ff1da193c93f126feece115773dd2392f6318ce735af54242365046cdaebba" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.526121 4787 scope.go:117] "RemoveContainer" containerID="c0993ea1a2352afb9355a6094f9897074e0eda9fc01f547ad3ac86144eb9370d" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.584725 4787 scope.go:117] "RemoveContainer" containerID="07746b23ec7ee77675eb24404beadeb71529a82afdac8aa4c22ed541dcd36713" Feb 19 19:47:04 crc kubenswrapper[4787]: I0219 19:47:04.645323 4787 scope.go:117] "RemoveContainer" 
containerID="68530f4faae8bf8c7243c2403909396a037e76649b77f5938368393ac3e98a50" Feb 19 19:47:05 crc kubenswrapper[4787]: I0219 19:47:05.492776 4787 generic.go:334] "Generic (PLEG): container finished" podID="290f63d5-112d-49e1-ade3-47e3a699dee7" containerID="2d2940a5350a8ef929246ff0b6a8dcd2556146726f46a4fdfdec493dadd0189e" exitCode=0 Feb 19 19:47:05 crc kubenswrapper[4787]: I0219 19:47:05.492870 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" event={"ID":"290f63d5-112d-49e1-ade3-47e3a699dee7","Type":"ContainerDied","Data":"2d2940a5350a8ef929246ff0b6a8dcd2556146726f46a4fdfdec493dadd0189e"} Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.087743 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.198943 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-scripts\") pod \"70954191-d761-4466-8f3d-2e60d61d19de\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.199034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-combined-ca-bundle\") pod \"70954191-d761-4466-8f3d-2e60d61d19de\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.199190 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-internal-tls-certs\") pod \"70954191-d761-4466-8f3d-2e60d61d19de\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.199335 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-public-tls-certs\") pod \"70954191-d761-4466-8f3d-2e60d61d19de\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.199417 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d2jc\" (UniqueName: \"kubernetes.io/projected/70954191-d761-4466-8f3d-2e60d61d19de-kube-api-access-7d2jc\") pod \"70954191-d761-4466-8f3d-2e60d61d19de\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.199444 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-config-data\") pod \"70954191-d761-4466-8f3d-2e60d61d19de\" (UID: \"70954191-d761-4466-8f3d-2e60d61d19de\") " Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.205777 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70954191-d761-4466-8f3d-2e60d61d19de-kube-api-access-7d2jc" (OuterVolumeSpecName: "kube-api-access-7d2jc") pod "70954191-d761-4466-8f3d-2e60d61d19de" (UID: "70954191-d761-4466-8f3d-2e60d61d19de"). InnerVolumeSpecName "kube-api-access-7d2jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.214843 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-scripts" (OuterVolumeSpecName: "scripts") pod "70954191-d761-4466-8f3d-2e60d61d19de" (UID: "70954191-d761-4466-8f3d-2e60d61d19de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.268801 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70954191-d761-4466-8f3d-2e60d61d19de" (UID: "70954191-d761-4466-8f3d-2e60d61d19de"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.275865 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "70954191-d761-4466-8f3d-2e60d61d19de" (UID: "70954191-d761-4466-8f3d-2e60d61d19de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.302037 4787 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.302070 4787 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.302079 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d2jc\" (UniqueName: \"kubernetes.io/projected/70954191-d761-4466-8f3d-2e60d61d19de-kube-api-access-7d2jc\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.302091 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.346269 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70954191-d761-4466-8f3d-2e60d61d19de" (UID: "70954191-d761-4466-8f3d-2e60d61d19de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.352757 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-config-data" (OuterVolumeSpecName: "config-data") pod "70954191-d761-4466-8f3d-2e60d61d19de" (UID: "70954191-d761-4466-8f3d-2e60d61d19de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.403948 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.404003 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70954191-d761-4466-8f3d-2e60d61d19de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.505901 4787 generic.go:334] "Generic (PLEG): container finished" podID="70954191-d761-4466-8f3d-2e60d61d19de" containerID="4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49" exitCode=0 Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.505939 4787 generic.go:334] "Generic (PLEG): container finished" podID="70954191-d761-4466-8f3d-2e60d61d19de" containerID="0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769" exitCode=0 Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 
19:47:06.505952 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerDied","Data":"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49"} Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.506029 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerDied","Data":"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769"} Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.506042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"70954191-d761-4466-8f3d-2e60d61d19de","Type":"ContainerDied","Data":"ff67b771de5b592e02f414bd1bd28048bd1728567609e40a21a74b2bb919808d"} Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.506065 4787 scope.go:117] "RemoveContainer" containerID="4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.505977 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.547239 4787 scope.go:117] "RemoveContainer" containerID="0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.560214 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.634402 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.645800 4787 scope.go:117] "RemoveContainer" containerID="a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.649498 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.650298 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-notifier" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650329 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-notifier" Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.650360 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-api" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650369 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-api" Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.650390 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-evaluator" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650399 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-evaluator" 
Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.650413 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-listener" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650421 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-listener" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650740 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-evaluator" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650797 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-api" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650830 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-listener" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.650888 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="70954191-d761-4466-8f3d-2e60d61d19de" containerName="aodh-notifier" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.656188 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.661010 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.663797 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.664185 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.664596 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.664803 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-f2j2m" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.664942 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.711211 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-public-tls-certs\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.711375 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-config-data\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.711462 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-scripts\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.711794 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.711957 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zz8\" (UniqueName: \"kubernetes.io/projected/d546d784-5037-48eb-96dd-e48dfe765bd4-kube-api-access-s4zz8\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.712033 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-internal-tls-certs\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.752380 4787 scope.go:117] "RemoveContainer" containerID="fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.782869 4787 scope.go:117] "RemoveContainer" containerID="4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49" Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.783299 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49\": container with ID starting with 4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49 not found: 
ID does not exist" containerID="4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.783347 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49"} err="failed to get container status \"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49\": rpc error: code = NotFound desc = could not find container \"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49\": container with ID starting with 4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.783374 4787 scope.go:117] "RemoveContainer" containerID="0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769" Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.783597 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769\": container with ID starting with 0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769 not found: ID does not exist" containerID="0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.783629 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769"} err="failed to get container status \"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769\": rpc error: code = NotFound desc = could not find container \"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769\": container with ID starting with 0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.783644 4787 
scope.go:117] "RemoveContainer" containerID="a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639" Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.783976 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639\": container with ID starting with a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639 not found: ID does not exist" containerID="a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.783997 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639"} err="failed to get container status \"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639\": rpc error: code = NotFound desc = could not find container \"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639\": container with ID starting with a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.784009 4787 scope.go:117] "RemoveContainer" containerID="fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85" Feb 19 19:47:06 crc kubenswrapper[4787]: E0219 19:47:06.784249 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85\": container with ID starting with fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85 not found: ID does not exist" containerID="fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.784266 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85"} err="failed to get container status \"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85\": rpc error: code = NotFound desc = could not find container \"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85\": container with ID starting with fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.784276 4787 scope.go:117] "RemoveContainer" containerID="4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.784821 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49"} err="failed to get container status \"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49\": rpc error: code = NotFound desc = could not find container \"4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49\": container with ID starting with 4320f6e5cec1214adf8735410919204550a429ecbc5c7296ce967fb763407e49 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.784860 4787 scope.go:117] "RemoveContainer" containerID="0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.785477 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769"} err="failed to get container status \"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769\": rpc error: code = NotFound desc = could not find container \"0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769\": container with ID starting with 0eeeca367ba6b0698f49fd153d1a4b89255e9ee6e14862a43c7349cb09342769 not found: ID does not 
exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.785502 4787 scope.go:117] "RemoveContainer" containerID="a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.785892 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639"} err="failed to get container status \"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639\": rpc error: code = NotFound desc = could not find container \"a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639\": container with ID starting with a4ef2f985ee6898c1b98a7bc85bde8fd37c56f2585c897f79719ce1972e69639 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.785912 4787 scope.go:117] "RemoveContainer" containerID="fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.786067 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85"} err="failed to get container status \"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85\": rpc error: code = NotFound desc = could not find container \"fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85\": container with ID starting with fdf50c489096317affd60dc9262789c6cc8bececbbdce40da76431a583b4ba85 not found: ID does not exist" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.814260 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-public-tls-certs\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.814356 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-config-data\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.814399 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-scripts\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.814424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.814481 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zz8\" (UniqueName: \"kubernetes.io/projected/d546d784-5037-48eb-96dd-e48dfe765bd4-kube-api-access-s4zz8\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.814523 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-internal-tls-certs\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.818236 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-scripts\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 
19:47:06.818523 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-internal-tls-certs\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.818549 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-public-tls-certs\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.832824 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-combined-ca-bundle\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.835664 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d546d784-5037-48eb-96dd-e48dfe765bd4-config-data\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.836073 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zz8\" (UniqueName: \"kubernetes.io/projected/d546d784-5037-48eb-96dd-e48dfe765bd4-kube-api-access-s4zz8\") pod \"aodh-0\" (UID: \"d546d784-5037-48eb-96dd-e48dfe765bd4\") " pod="openstack/aodh-0" Feb 19 19:47:06 crc kubenswrapper[4787]: I0219 19:47:06.914814 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70954191-d761-4466-8f3d-2e60d61d19de" path="/var/lib/kubelet/pods/70954191-d761-4466-8f3d-2e60d61d19de/volumes" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.054815 4787 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.094284 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.223345 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-ssh-key-openstack-edpm-ipam\") pod \"290f63d5-112d-49e1-ade3-47e3a699dee7\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.223637 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-inventory\") pod \"290f63d5-112d-49e1-ade3-47e3a699dee7\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.223694 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz2dn\" (UniqueName: \"kubernetes.io/projected/290f63d5-112d-49e1-ade3-47e3a699dee7-kube-api-access-bz2dn\") pod \"290f63d5-112d-49e1-ade3-47e3a699dee7\" (UID: \"290f63d5-112d-49e1-ade3-47e3a699dee7\") " Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.230976 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290f63d5-112d-49e1-ade3-47e3a699dee7-kube-api-access-bz2dn" (OuterVolumeSpecName: "kube-api-access-bz2dn") pod "290f63d5-112d-49e1-ade3-47e3a699dee7" (UID: "290f63d5-112d-49e1-ade3-47e3a699dee7"). InnerVolumeSpecName "kube-api-access-bz2dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.266894 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "290f63d5-112d-49e1-ade3-47e3a699dee7" (UID: "290f63d5-112d-49e1-ade3-47e3a699dee7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.274591 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-inventory" (OuterVolumeSpecName: "inventory") pod "290f63d5-112d-49e1-ade3-47e3a699dee7" (UID: "290f63d5-112d-49e1-ade3-47e3a699dee7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.326200 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.326232 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f63d5-112d-49e1-ade3-47e3a699dee7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.326241 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz2dn\" (UniqueName: \"kubernetes.io/projected/290f63d5-112d-49e1-ade3-47e3a699dee7-kube-api-access-bz2dn\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.522638 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" 
event={"ID":"290f63d5-112d-49e1-ade3-47e3a699dee7","Type":"ContainerDied","Data":"eaff741fc5a057473478ff28041655de1b3ee6c8ba0262957c31433e36e6eab9"} Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.522685 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaff741fc5a057473478ff28041655de1b3ee6c8ba0262957c31433e36e6eab9" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.522738 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mw8x5" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.591633 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.638988 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6"] Feb 19 19:47:07 crc kubenswrapper[4787]: E0219 19:47:07.639777 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290f63d5-112d-49e1-ade3-47e3a699dee7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.639861 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="290f63d5-112d-49e1-ade3-47e3a699dee7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.640292 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="290f63d5-112d-49e1-ade3-47e3a699dee7" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.641273 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.644555 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.644800 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.646260 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.646281 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.656110 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6"] Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.736389 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.736707 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: 
I0219 19:47:07.736931 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwflt\" (UniqueName: \"kubernetes.io/projected/b03f276c-b1cd-46aa-ac07-69221b9d6684-kube-api-access-jwflt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.737212 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.839993 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.840298 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.840444 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwflt\" (UniqueName: 
\"kubernetes.io/projected/b03f276c-b1cd-46aa-ac07-69221b9d6684-kube-api-access-jwflt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.840574 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.844739 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.844957 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.845361 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:07 crc kubenswrapper[4787]: I0219 19:47:07.858202 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwflt\" (UniqueName: \"kubernetes.io/projected/b03f276c-b1cd-46aa-ac07-69221b9d6684-kube-api-access-jwflt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:08 crc kubenswrapper[4787]: I0219 19:47:08.022982 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:47:08 crc kubenswrapper[4787]: I0219 19:47:08.540793 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d546d784-5037-48eb-96dd-e48dfe765bd4","Type":"ContainerStarted","Data":"dc93e46a49a00587ff790020709aaf05fd6df61ae7a46972bf4b69b582b57027"} Feb 19 19:47:08 crc kubenswrapper[4787]: I0219 19:47:08.541466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d546d784-5037-48eb-96dd-e48dfe765bd4","Type":"ContainerStarted","Data":"5a4b88e0b872a8336f59896e2a759dcaf2083700d37d26bbfb4a72cfe03bb606"} Feb 19 19:47:08 crc kubenswrapper[4787]: I0219 19:47:08.575151 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6"] Feb 19 19:47:08 crc kubenswrapper[4787]: W0219 19:47:08.577769 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb03f276c_b1cd_46aa_ac07_69221b9d6684.slice/crio-e5d820563e0a1bf29027c88f24fc00cdd4278077aa5a0c68c1bb73a8aef59135 WatchSource:0}: Error finding container e5d820563e0a1bf29027c88f24fc00cdd4278077aa5a0c68c1bb73a8aef59135: Status 404 returned error can't find the container with id 
e5d820563e0a1bf29027c88f24fc00cdd4278077aa5a0c68c1bb73a8aef59135 Feb 19 19:47:09 crc kubenswrapper[4787]: I0219 19:47:09.302326 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 19 19:47:09 crc kubenswrapper[4787]: I0219 19:47:09.554958 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d546d784-5037-48eb-96dd-e48dfe765bd4","Type":"ContainerStarted","Data":"e94a2f84c3bce7c754f672381edebb5a11ba94833d4819be44527ebc2b34151d"} Feb 19 19:47:09 crc kubenswrapper[4787]: I0219 19:47:09.557161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" event={"ID":"b03f276c-b1cd-46aa-ac07-69221b9d6684","Type":"ContainerStarted","Data":"94546ef213a7089de3fdc6d683542ac74e794f4218e1406d2400fddcbccb5943"} Feb 19 19:47:09 crc kubenswrapper[4787]: I0219 19:47:09.558299 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" event={"ID":"b03f276c-b1cd-46aa-ac07-69221b9d6684","Type":"ContainerStarted","Data":"e5d820563e0a1bf29027c88f24fc00cdd4278077aa5a0c68c1bb73a8aef59135"} Feb 19 19:47:09 crc kubenswrapper[4787]: I0219 19:47:09.594732 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" podStartSLOduration=2.18389928 podStartE2EDuration="2.594701902s" podCreationTimestamp="2026-02-19 19:47:07 +0000 UTC" firstStartedPulling="2026-02-19 19:47:08.580165981 +0000 UTC m=+1696.370831923" lastFinishedPulling="2026-02-19 19:47:08.990968603 +0000 UTC m=+1696.781634545" observedRunningTime="2026-02-19 19:47:09.574514219 +0000 UTC m=+1697.365180171" watchObservedRunningTime="2026-02-19 19:47:09.594701902 +0000 UTC m=+1697.385367854" Feb 
19 19:47:10 crc kubenswrapper[4787]: I0219 19:47:10.575936 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d546d784-5037-48eb-96dd-e48dfe765bd4","Type":"ContainerStarted","Data":"c1a8c884c02ef7e0bcf231124098d9aff2e47a6ae44fd0add06abb3584ded9ec"} Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.063337 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.124002 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-server-conf\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.124420 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-confd\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.128700 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.128766 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-plugins-conf\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.128790 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-erlang-cookie\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.128866 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14da78cc-cd10-440d-9983-6e80d45f3e31-pod-info\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.128898 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gbz\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-kube-api-access-v6gbz\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.128915 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-tls\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.129056 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14da78cc-cd10-440d-9983-6e80d45f3e31-erlang-cookie-secret\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.129089 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-plugins\") pod 
\"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.129125 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-config-data\") pod \"14da78cc-cd10-440d-9983-6e80d45f3e31\" (UID: \"14da78cc-cd10-440d-9983-6e80d45f3e31\") " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.135004 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/14da78cc-cd10-440d-9983-6e80d45f3e31-pod-info" (OuterVolumeSpecName: "pod-info") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.137965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.138278 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.139697 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.139964 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.149035 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-kube-api-access-v6gbz" (OuterVolumeSpecName: "kube-api-access-v6gbz") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "kube-api-access-v6gbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.165310 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14da78cc-cd10-440d-9983-6e80d45f3e31-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.185890 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed" (OuterVolumeSpecName: "persistence") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.224488 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-config-data" (OuterVolumeSpecName: "config-data") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233264 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") on node \"crc\" " Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233302 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233313 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233322 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/14da78cc-cd10-440d-9983-6e80d45f3e31-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233333 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gbz\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-kube-api-access-v6gbz\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233342 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233350 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14da78cc-cd10-440d-9983-6e80d45f3e31-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233361 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.233369 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.252726 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-server-conf" (OuterVolumeSpecName: "server-conf") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.282150 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.284424 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed") on node "crc" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.310121 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "14da78cc-cd10-440d-9983-6e80d45f3e31" (UID: "14da78cc-cd10-440d-9983-6e80d45f3e31"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.335325 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14da78cc-cd10-440d-9983-6e80d45f3e31-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.335357 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14da78cc-cd10-440d-9983-6e80d45f3e31-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.335373 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") on node \"crc\" DevicePath \"\"" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.600832 4787 generic.go:334] "Generic (PLEG): container finished" podID="14da78cc-cd10-440d-9983-6e80d45f3e31" 
containerID="b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4" exitCode=0 Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.600899 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14da78cc-cd10-440d-9983-6e80d45f3e31","Type":"ContainerDied","Data":"b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4"} Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.600938 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"14da78cc-cd10-440d-9983-6e80d45f3e31","Type":"ContainerDied","Data":"4de50483fd2fab0a7fe80161a0b54c5cbd81f51b87aa1eb5e4fa06b45f45ec78"} Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.600977 4787 scope.go:117] "RemoveContainer" containerID="b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.600958 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.646655 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.666561 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.681540 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:47:11 crc kubenswrapper[4787]: E0219 19:47:11.682203 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="setup-container" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.682220 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="setup-container" Feb 19 19:47:11 crc kubenswrapper[4787]: E0219 19:47:11.682242 4787 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="rabbitmq" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.682249 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="rabbitmq" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.682496 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" containerName="rabbitmq" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.683920 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.696771 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749467 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749546 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27cg2\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-kube-api-access-27cg2\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749628 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb22ceb0-4965-4d12-9950-93feeb6876e9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " 
pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749698 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749750 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749796 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.749813 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-config-data\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.750231 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 
19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.750336 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb22ceb0-4965-4d12-9950-93feeb6876e9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.750411 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.750504 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.819100 4787 scope.go:117] "RemoveContainer" containerID="d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853106 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853192 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27cg2\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-kube-api-access-27cg2\") pod \"rabbitmq-server-1\" 
(UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853238 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb22ceb0-4965-4d12-9950-93feeb6876e9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853280 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853340 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853389 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853416 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-config-data\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc 
kubenswrapper[4787]: I0219 19:47:11.853485 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853519 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb22ceb0-4965-4d12-9950-93feeb6876e9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853548 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853586 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.853779 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.854037 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.854290 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-config-data\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.858512 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.859081 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb22ceb0-4965-4d12-9950-93feeb6876e9-server-conf\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.862082 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.862109 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f93760bfdbe999d7fc2ff672e7708eaae49a08043ada3581238c277596fdb17c/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.862191 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb22ceb0-4965-4d12-9950-93feeb6876e9-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.887254 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb22ceb0-4965-4d12-9950-93feeb6876e9-pod-info\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.887356 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.890357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " 
pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.892221 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:47:11 crc kubenswrapper[4787]: E0219 19:47:11.892549 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.906307 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27cg2\" (UniqueName: \"kubernetes.io/projected/eb22ceb0-4965-4d12-9950-93feeb6876e9-kube-api-access-27cg2\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.919168 4787 scope.go:117] "RemoveContainer" containerID="b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4" Feb 19 19:47:11 crc kubenswrapper[4787]: E0219 19:47:11.927212 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4\": container with ID starting with b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4 not found: ID does not exist" containerID="b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.927261 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4"} err="failed to get container status 
\"b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4\": rpc error: code = NotFound desc = could not find container \"b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4\": container with ID starting with b4cc41bc84774c926e95e0aff3e699e7d38c9475175f5e9a2f52cdfcf8bbe5d4 not found: ID does not exist" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.927294 4787 scope.go:117] "RemoveContainer" containerID="d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9" Feb 19 19:47:11 crc kubenswrapper[4787]: E0219 19:47:11.929744 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9\": container with ID starting with d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9 not found: ID does not exist" containerID="d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.929775 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9"} err="failed to get container status \"d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9\": rpc error: code = NotFound desc = could not find container \"d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9\": container with ID starting with d4d24273723645ecaed89741689f62576251587cfc141a209fef8a1e8b6a3ab9 not found: ID does not exist" Feb 19 19:47:11 crc kubenswrapper[4787]: I0219 19:47:11.970820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1bf05d5f-98c8-4b94-85e7-535f712fafed\") pod \"rabbitmq-server-1\" (UID: \"eb22ceb0-4965-4d12-9950-93feeb6876e9\") " pod="openstack/rabbitmq-server-1" Feb 19 19:47:12 crc 
kubenswrapper[4787]: I0219 19:47:12.014362 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 19 19:47:12 crc kubenswrapper[4787]: I0219 19:47:12.537838 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 19 19:47:12 crc kubenswrapper[4787]: W0219 19:47:12.554651 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb22ceb0_4965_4d12_9950_93feeb6876e9.slice/crio-85aa7e80d209d32c1dc7ccaa53b5fda4092589d67dbd3650b463d7279ec47204 WatchSource:0}: Error finding container 85aa7e80d209d32c1dc7ccaa53b5fda4092589d67dbd3650b463d7279ec47204: Status 404 returned error can't find the container with id 85aa7e80d209d32c1dc7ccaa53b5fda4092589d67dbd3650b463d7279ec47204 Feb 19 19:47:12 crc kubenswrapper[4787]: I0219 19:47:12.673869 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"d546d784-5037-48eb-96dd-e48dfe765bd4","Type":"ContainerStarted","Data":"654a9cba35c4e30aebea2baf6ba504a7f74e7ae85c13ff8e90bd6292d10760b6"} Feb 19 19:47:12 crc kubenswrapper[4787]: I0219 19:47:12.683312 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"eb22ceb0-4965-4d12-9950-93feeb6876e9","Type":"ContainerStarted","Data":"85aa7e80d209d32c1dc7ccaa53b5fda4092589d67dbd3650b463d7279ec47204"} Feb 19 19:47:12 crc kubenswrapper[4787]: I0219 19:47:12.715862 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.337330521 podStartE2EDuration="6.715844249s" podCreationTimestamp="2026-02-19 19:47:06 +0000 UTC" firstStartedPulling="2026-02-19 19:47:07.558655334 +0000 UTC m=+1695.349321266" lastFinishedPulling="2026-02-19 19:47:11.937169052 +0000 UTC m=+1699.727834994" observedRunningTime="2026-02-19 19:47:12.709036756 +0000 UTC m=+1700.499702698" watchObservedRunningTime="2026-02-19 
19:47:12.715844249 +0000 UTC m=+1700.506510191" Feb 19 19:47:12 crc kubenswrapper[4787]: I0219 19:47:12.912716 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14da78cc-cd10-440d-9983-6e80d45f3e31" path="/var/lib/kubelet/pods/14da78cc-cd10-440d-9983-6e80d45f3e31/volumes" Feb 19 19:47:14 crc kubenswrapper[4787]: I0219 19:47:14.709581 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"eb22ceb0-4965-4d12-9950-93feeb6876e9","Type":"ContainerStarted","Data":"bdb6cab67b4d61e03baf7200734f911b5b9e6600b57809c43b26547398698d25"} Feb 19 19:47:25 crc kubenswrapper[4787]: I0219 19:47:25.892834 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:47:25 crc kubenswrapper[4787]: E0219 19:47:25.893590 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:47:37 crc kubenswrapper[4787]: I0219 19:47:37.891691 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:47:37 crc kubenswrapper[4787]: E0219 19:47:37.893727 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:47:47 crc kubenswrapper[4787]: I0219 
19:47:47.076802 4787 generic.go:334] "Generic (PLEG): container finished" podID="eb22ceb0-4965-4d12-9950-93feeb6876e9" containerID="bdb6cab67b4d61e03baf7200734f911b5b9e6600b57809c43b26547398698d25" exitCode=0 Feb 19 19:47:47 crc kubenswrapper[4787]: I0219 19:47:47.076878 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"eb22ceb0-4965-4d12-9950-93feeb6876e9","Type":"ContainerDied","Data":"bdb6cab67b4d61e03baf7200734f911b5b9e6600b57809c43b26547398698d25"} Feb 19 19:47:48 crc kubenswrapper[4787]: I0219 19:47:48.089107 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"eb22ceb0-4965-4d12-9950-93feeb6876e9","Type":"ContainerStarted","Data":"c0399a6188d10c1e4d32062a807172842a8325a968302b53000271937b369417"} Feb 19 19:47:48 crc kubenswrapper[4787]: I0219 19:47:48.089635 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 19 19:47:48 crc kubenswrapper[4787]: I0219 19:47:48.126889 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.126870158 podStartE2EDuration="37.126870158s" podCreationTimestamp="2026-02-19 19:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:47:48.107811688 +0000 UTC m=+1735.898477640" watchObservedRunningTime="2026-02-19 19:47:48.126870158 +0000 UTC m=+1735.917536100" Feb 19 19:47:50 crc kubenswrapper[4787]: I0219 19:47:50.892987 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:47:50 crc kubenswrapper[4787]: E0219 19:47:50.894317 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:48:02 crc kubenswrapper[4787]: I0219 19:48:02.016577 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 19 19:48:02 crc kubenswrapper[4787]: I0219 19:48:02.074059 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:48:02 crc kubenswrapper[4787]: I0219 19:48:02.950397 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:48:02 crc kubenswrapper[4787]: E0219 19:48:02.950712 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:48:04 crc kubenswrapper[4787]: I0219 19:48:04.940973 4787 scope.go:117] "RemoveContainer" containerID="5979d1a1099c4a81c967c497327717b7b09b5c839e7246d48b1eac97d1b029ab" Feb 19 19:48:04 crc kubenswrapper[4787]: I0219 19:48:04.973408 4787 scope.go:117] "RemoveContainer" containerID="0b3552000e24191580cb9823e21078b530500afc71d67610a52addf7fe81f487" Feb 19 19:48:05 crc kubenswrapper[4787]: I0219 19:48:05.024775 4787 scope.go:117] "RemoveContainer" containerID="e6a3556b647212ecfff605de992c2e6d0ae9e331c50fd5eef779e2bf272fdd11" Feb 19 19:48:05 crc kubenswrapper[4787]: I0219 19:48:05.099530 4787 scope.go:117] "RemoveContainer" containerID="225778b6b22ebbc805da1e42b1f565f5dce58b743e9e399720a5eb33961c6c0a" Feb 19 19:48:05 crc kubenswrapper[4787]: I0219 19:48:05.128564 
4787 scope.go:117] "RemoveContainer" containerID="ec051585bc6a7852fa175077fb4d757ce922e6fdd160df98d35fcec5b0f477a7" Feb 19 19:48:05 crc kubenswrapper[4787]: I0219 19:48:05.193830 4787 scope.go:117] "RemoveContainer" containerID="3e767471a7e41ea35ca98aa52aa978c8979e879e75ecf04adb8f351dd6e50b50" Feb 19 19:48:06 crc kubenswrapper[4787]: I0219 19:48:06.499972 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="rabbitmq" containerID="cri-o://9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17" gracePeriod=604796 Feb 19 19:48:08 crc kubenswrapper[4787]: I0219 19:48:08.958656 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.197166 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.358346 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-erlang-cookie\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.358706 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-plugins-conf\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.358762 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80458aec-a844-4f4d-b618-56bdc811cd43-erlang-cookie-secret\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.358944 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-tls\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.359019 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-confd\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.359868 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.360314 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.360381 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80458aec-a844-4f4d-b618-56bdc811cd43-pod-info\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.360413 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-plugins\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.360500 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-config-data\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.360538 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h96zn\" (UniqueName: 
\"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-kube-api-access-h96zn\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.360571 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-server-conf\") pod \"80458aec-a844-4f4d-b618-56bdc811cd43\" (UID: \"80458aec-a844-4f4d-b618-56bdc811cd43\") " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.361048 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.361755 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.361770 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.361743 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.366112 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80458aec-a844-4f4d-b618-56bdc811cd43-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.374408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-kube-api-access-h96zn" (OuterVolumeSpecName: "kube-api-access-h96zn") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "kube-api-access-h96zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.374661 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.381023 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/80458aec-a844-4f4d-b618-56bdc811cd43-pod-info" (OuterVolumeSpecName: "pod-info") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.389066 4787 generic.go:334] "Generic (PLEG): container finished" podID="80458aec-a844-4f4d-b618-56bdc811cd43" containerID="9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17" exitCode=0 Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.389127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80458aec-a844-4f4d-b618-56bdc811cd43","Type":"ContainerDied","Data":"9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17"} Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.389341 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80458aec-a844-4f4d-b618-56bdc811cd43","Type":"ContainerDied","Data":"ea204586da409cf37d9664ca91fd23dea40bbd80fdf91a3c284cd713802920b5"} Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.389366 4787 scope.go:117] "RemoveContainer" containerID="9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.389539 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.399253 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-config-data" (OuterVolumeSpecName: "config-data") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.403519 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8" (OuterVolumeSpecName: "persistence") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "pvc-0d63f000-175b-428f-b847-dd0eab99c1e8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464551 4787 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464585 4787 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80458aec-a844-4f4d-b618-56bdc811cd43-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464624 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464659 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") on node \"crc\" " Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464674 4787 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80458aec-a844-4f4d-b618-56bdc811cd43-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464688 4787 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.464712 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h96zn\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-kube-api-access-h96zn\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.483239 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-server-conf" (OuterVolumeSpecName: "server-conf") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.508676 4787 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.508827 4787 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d63f000-175b-428f-b847-dd0eab99c1e8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8") on node "crc" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.514771 4787 scope.go:117] "RemoveContainer" containerID="b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.541384 4787 scope.go:117] "RemoveContainer" containerID="9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17" Feb 19 19:48:13 crc kubenswrapper[4787]: E0219 19:48:13.541927 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17\": container with ID starting with 9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17 not found: ID does not exist" containerID="9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.541966 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17"} err="failed to get container status \"9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17\": rpc error: code = NotFound desc = could not find container \"9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17\": container with ID starting with 9b15945550a013918e2a8d40692b8b4329308a89fa60b434186f75bcff7e9d17 not found: ID does not exist" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.541992 4787 scope.go:117] "RemoveContainer" containerID="b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8" Feb 19 19:48:13 crc kubenswrapper[4787]: E0219 19:48:13.542407 4787 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8\": container with ID starting with b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8 not found: ID does not exist" containerID="b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.542445 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8"} err="failed to get container status \"b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8\": rpc error: code = NotFound desc = could not find container \"b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8\": container with ID starting with b9967bb6cffb8c09cf98f779a86480fc4a12686e202d13567d8f807840581de8 not found: ID does not exist" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.542562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "80458aec-a844-4f4d-b618-56bdc811cd43" (UID: "80458aec-a844-4f4d-b618-56bdc811cd43"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.567422 4787 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80458aec-a844-4f4d-b618-56bdc811cd43-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.567452 4787 reconciler_common.go:293] "Volume detached for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.567464 4787 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80458aec-a844-4f4d-b618-56bdc811cd43-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.785236 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.803028 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.824975 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:48:13 crc kubenswrapper[4787]: E0219 19:48:13.825457 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="setup-container" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.825475 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="setup-container" Feb 19 19:48:13 crc kubenswrapper[4787]: E0219 19:48:13.825507 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="rabbitmq" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 
19:48:13.825513 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="rabbitmq" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.825760 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" containerName="rabbitmq" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.827065 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.873340 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.977100 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.977766 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.977876 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978129 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978157 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrpcx\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-kube-api-access-lrpcx\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978239 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978438 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978661 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978687 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978771 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:13 crc kubenswrapper[4787]: I0219 19:48:13.978828 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.081174 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.081223 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrpcx\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-kube-api-access-lrpcx\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.081267 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.081325 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.081352 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.081370 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.082177 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.082246 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.082325 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.082390 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.082803 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.083395 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.083561 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.083940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.084009 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.084082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.085761 4787 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.085790 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8d6ea76abc031ad8be5f46b0b6e594bd6a7a032377b43b3a75a511904a039abd/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.085945 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.086783 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.088767 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.090607 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " 
pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.113246 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrpcx\" (UniqueName: \"kubernetes.io/projected/3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3-kube-api-access-lrpcx\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.188155 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d63f000-175b-428f-b847-dd0eab99c1e8\") pod \"rabbitmq-server-0\" (UID: \"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3\") " pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.484068 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 19:48:14 crc kubenswrapper[4787]: I0219 19:48:14.904791 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80458aec-a844-4f4d-b618-56bdc811cd43" path="/var/lib/kubelet/pods/80458aec-a844-4f4d-b618-56bdc811cd43/volumes" Feb 19 19:48:15 crc kubenswrapper[4787]: I0219 19:48:15.027664 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 19:48:15 crc kubenswrapper[4787]: I0219 19:48:15.411921 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3","Type":"ContainerStarted","Data":"55cb6823a2b44ae642e6eaa1fb7cdb6b82ab605c89abb0261b5f7efa578820ae"} Feb 19 19:48:15 crc kubenswrapper[4787]: I0219 19:48:15.891767 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:48:15 crc kubenswrapper[4787]: E0219 19:48:15.892385 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:48:17 crc kubenswrapper[4787]: I0219 19:48:17.462555 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3","Type":"ContainerStarted","Data":"b7c400780e3d35d8fb91f788bd15ef500915a8a8da67fa73d7168322222b3823"} Feb 19 19:48:26 crc kubenswrapper[4787]: I0219 19:48:26.892425 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:48:26 crc kubenswrapper[4787]: E0219 19:48:26.893234 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:48:41 crc kubenswrapper[4787]: I0219 19:48:41.892040 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:48:42 crc kubenswrapper[4787]: I0219 19:48:42.744365 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"b28d37fcd770cc8ddbfa968a92e9da36d6c302db5f51061d56875fb7273ab058"} Feb 19 19:48:48 crc kubenswrapper[4787]: I0219 19:48:48.812014 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3" containerID="b7c400780e3d35d8fb91f788bd15ef500915a8a8da67fa73d7168322222b3823" exitCode=0 Feb 19 19:48:48 crc kubenswrapper[4787]: I0219 19:48:48.812239 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3","Type":"ContainerDied","Data":"b7c400780e3d35d8fb91f788bd15ef500915a8a8da67fa73d7168322222b3823"} Feb 19 19:48:49 crc kubenswrapper[4787]: I0219 19:48:49.875782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3","Type":"ContainerStarted","Data":"d6e9c13792d6e97f166b7bb6e99cbfe723c1e4fecf8cc851b9f1ca9adaf707d7"} Feb 19 19:48:49 crc kubenswrapper[4787]: I0219 19:48:49.876480 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 19:48:49 crc kubenswrapper[4787]: I0219 19:48:49.909935 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.909915145 podStartE2EDuration="36.909915145s" podCreationTimestamp="2026-02-19 19:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 19:48:49.895928849 +0000 UTC m=+1797.686594781" watchObservedRunningTime="2026-02-19 19:48:49.909915145 +0000 UTC m=+1797.700581087" Feb 19 19:49:04 crc kubenswrapper[4787]: I0219 19:49:04.486784 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 19:49:05 crc kubenswrapper[4787]: I0219 19:49:05.384123 4787 scope.go:117] "RemoveContainer" containerID="24932b80b6180589a6091c22c3fc19891c39c9b71cf53c6d9a163204e319e37d" Feb 19 19:49:05 crc kubenswrapper[4787]: I0219 19:49:05.425689 4787 scope.go:117] "RemoveContainer" containerID="ef05c65af8b89c300162675ad69f1b96fcfb66e2b584a833182454318e094a50" Feb 
19 19:49:05 crc kubenswrapper[4787]: I0219 19:49:05.452134 4787 scope.go:117] "RemoveContainer" containerID="7db0d4879dbf46df1e3eebe19d9243e12faefeee2673963e808755605369381b" Feb 19 19:49:05 crc kubenswrapper[4787]: I0219 19:49:05.479910 4787 scope.go:117] "RemoveContainer" containerID="316f748abb9cd2c9d046d36021373b0b4f50ff202826fd07ad9f33691ff41a5e" Feb 19 19:49:05 crc kubenswrapper[4787]: I0219 19:49:05.505985 4787 scope.go:117] "RemoveContainer" containerID="901ec92c5a513ba6e9efa10c261a83d2c7fc16d115735ecae610aee037c4a01a" Feb 19 19:49:57 crc kubenswrapper[4787]: E0219 19:49:57.899416 4787 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.007s" Feb 19 19:50:05 crc kubenswrapper[4787]: I0219 19:50:05.615724 4787 scope.go:117] "RemoveContainer" containerID="ee7b46f20435feaaa0a9eb42494f632259041444c1a75a21e2cfa3cdf95387ac" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.065383 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ec9b-account-create-update-x8rkj"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.078001 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0257-account-create-update-m9bdf"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.090867 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-n4sbd"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.102704 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0257-account-create-update-m9bdf"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.112967 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ec9b-account-create-update-x8rkj"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.123136 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0b91-account-create-update-npwhw"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 
19:50:14.134561 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-241f-account-create-update-bphqx"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.146324 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5z6xh"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.157699 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-n4sbd"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.170743 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7jdz5"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.182249 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v6kj2"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.194880 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0b91-account-create-update-npwhw"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.205139 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-241f-account-create-update-bphqx"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.214923 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5z6xh"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.224376 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7jdz5"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.234649 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v6kj2"] Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.908098 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe5aa4b-4d38-4918-adc4-b507bb6f1317" path="/var/lib/kubelet/pods/6fe5aa4b-4d38-4918-adc4-b507bb6f1317/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.910843 4787 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7c20b79d-45cd-476a-9309-d7850a869dd8" path="/var/lib/kubelet/pods/7c20b79d-45cd-476a-9309-d7850a869dd8/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.912055 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9002f19b-58a4-49df-9f61-945f8bca211e" path="/var/lib/kubelet/pods/9002f19b-58a4-49df-9f61-945f8bca211e/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.913550 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dd860c-7fed-409a-92ca-374370b9e80f" path="/var/lib/kubelet/pods/98dd860c-7fed-409a-92ca-374370b9e80f/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.916910 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80f1688-cd91-4c7e-a26a-20763f129b82" path="/var/lib/kubelet/pods/a80f1688-cd91-4c7e-a26a-20763f129b82/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.917544 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d74f59-7434-4c00-8097-59f873601963" path="/var/lib/kubelet/pods/c3d74f59-7434-4c00-8097-59f873601963/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.918547 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c490b914-c022-49de-a191-891ca4991459" path="/var/lib/kubelet/pods/c490b914-c022-49de-a191-891ca4991459/volumes" Feb 19 19:50:14 crc kubenswrapper[4787]: I0219 19:50:14.919916 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7643078-1dbe-4a7e-9dee-7a4886d87d5e" path="/var/lib/kubelet/pods/f7643078-1dbe-4a7e-9dee-7a4886d87d5e/volumes" Feb 19 19:50:20 crc kubenswrapper[4787]: I0219 19:50:20.036979 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-c19d-account-create-update-fhvjl"] Feb 19 19:50:20 crc kubenswrapper[4787]: I0219 19:50:20.052537 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6"] Feb 19 19:50:20 crc kubenswrapper[4787]: I0219 19:50:20.065537 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-c19d-account-create-update-fhvjl"] Feb 19 19:50:20 crc kubenswrapper[4787]: I0219 19:50:20.075355 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-5l6s6"] Feb 19 19:50:20 crc kubenswrapper[4787]: I0219 19:50:20.907939 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16996041-3a8b-4ed9-a1f6-830900b59b28" path="/var/lib/kubelet/pods/16996041-3a8b-4ed9-a1f6-830900b59b28/volumes" Feb 19 19:50:20 crc kubenswrapper[4787]: I0219 19:50:20.928501 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77aba799-1dab-40af-a989-33dc600ad004" path="/var/lib/kubelet/pods/77aba799-1dab-40af-a989-33dc600ad004/volumes" Feb 19 19:50:32 crc kubenswrapper[4787]: I0219 19:50:32.346763 4787 generic.go:334] "Generic (PLEG): container finished" podID="b03f276c-b1cd-46aa-ac07-69221b9d6684" containerID="94546ef213a7089de3fdc6d683542ac74e794f4218e1406d2400fddcbccb5943" exitCode=0 Feb 19 19:50:32 crc kubenswrapper[4787]: I0219 19:50:32.346844 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" event={"ID":"b03f276c-b1cd-46aa-ac07-69221b9d6684","Type":"ContainerDied","Data":"94546ef213a7089de3fdc6d683542ac74e794f4218e1406d2400fddcbccb5943"} Feb 19 19:50:33 crc kubenswrapper[4787]: I0219 19:50:33.881433 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.004355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwflt\" (UniqueName: \"kubernetes.io/projected/b03f276c-b1cd-46aa-ac07-69221b9d6684-kube-api-access-jwflt\") pod \"b03f276c-b1cd-46aa-ac07-69221b9d6684\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.004556 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-bootstrap-combined-ca-bundle\") pod \"b03f276c-b1cd-46aa-ac07-69221b9d6684\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.004599 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-ssh-key-openstack-edpm-ipam\") pod \"b03f276c-b1cd-46aa-ac07-69221b9d6684\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.004797 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-inventory\") pod \"b03f276c-b1cd-46aa-ac07-69221b9d6684\" (UID: \"b03f276c-b1cd-46aa-ac07-69221b9d6684\") " Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.010307 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b03f276c-b1cd-46aa-ac07-69221b9d6684" (UID: "b03f276c-b1cd-46aa-ac07-69221b9d6684"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.011497 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03f276c-b1cd-46aa-ac07-69221b9d6684-kube-api-access-jwflt" (OuterVolumeSpecName: "kube-api-access-jwflt") pod "b03f276c-b1cd-46aa-ac07-69221b9d6684" (UID: "b03f276c-b1cd-46aa-ac07-69221b9d6684"). InnerVolumeSpecName "kube-api-access-jwflt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.038889 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b03f276c-b1cd-46aa-ac07-69221b9d6684" (UID: "b03f276c-b1cd-46aa-ac07-69221b9d6684"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.042173 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-inventory" (OuterVolumeSpecName: "inventory") pod "b03f276c-b1cd-46aa-ac07-69221b9d6684" (UID: "b03f276c-b1cd-46aa-ac07-69221b9d6684"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.108725 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwflt\" (UniqueName: \"kubernetes.io/projected/b03f276c-b1cd-46aa-ac07-69221b9d6684-kube-api-access-jwflt\") on node \"crc\" DevicePath \"\"" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.108775 4787 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.108786 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.108796 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b03f276c-b1cd-46aa-ac07-69221b9d6684-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.372171 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" event={"ID":"b03f276c-b1cd-46aa-ac07-69221b9d6684","Type":"ContainerDied","Data":"e5d820563e0a1bf29027c88f24fc00cdd4278077aa5a0c68c1bb73a8aef59135"} Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.372211 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d820563e0a1bf29027c88f24fc00cdd4278077aa5a0c68c1bb73a8aef59135" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.372272 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.461348 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2"] Feb 19 19:50:34 crc kubenswrapper[4787]: E0219 19:50:34.462166 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03f276c-b1cd-46aa-ac07-69221b9d6684" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.462276 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03f276c-b1cd-46aa-ac07-69221b9d6684" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.462660 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03f276c-b1cd-46aa-ac07-69221b9d6684" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.463906 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.466832 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.467174 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.467367 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.467513 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.478133 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2"] Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.620895 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.620969 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22zp\" (UniqueName: \"kubernetes.io/projected/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-kube-api-access-v22zp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc 
kubenswrapper[4787]: I0219 19:50:34.621037 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.722790 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.722863 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v22zp\" (UniqueName: \"kubernetes.io/projected/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-kube-api-access-v22zp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.722924 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.730367 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.730376 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.740627 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v22zp\" (UniqueName: \"kubernetes.io/projected/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-kube-api-access-v22zp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:34 crc kubenswrapper[4787]: I0219 19:50:34.786135 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" Feb 19 19:50:35 crc kubenswrapper[4787]: I0219 19:50:35.344846 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2"] Feb 19 19:50:35 crc kubenswrapper[4787]: I0219 19:50:35.345334 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:50:35 crc kubenswrapper[4787]: I0219 19:50:35.384202 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" event={"ID":"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca","Type":"ContainerStarted","Data":"87753b241c5682c1baa5620f42838d64b64d1a9878ee3243ff972263ada530f2"} Feb 19 19:50:36 crc kubenswrapper[4787]: I0219 19:50:36.401134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" event={"ID":"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca","Type":"ContainerStarted","Data":"709d423c59b027abdf176824bb3ae0de90ed468f46972f460fe311d84536f967"} Feb 19 19:50:36 crc kubenswrapper[4787]: I0219 19:50:36.428695 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" podStartSLOduration=1.9377363600000002 podStartE2EDuration="2.428673179s" podCreationTimestamp="2026-02-19 19:50:34 +0000 UTC" firstStartedPulling="2026-02-19 19:50:35.345086384 +0000 UTC m=+1903.135752326" lastFinishedPulling="2026-02-19 19:50:35.836023203 +0000 UTC m=+1903.626689145" observedRunningTime="2026-02-19 19:50:36.427847676 +0000 UTC m=+1904.218513638" watchObservedRunningTime="2026-02-19 19:50:36.428673179 +0000 UTC m=+1904.219339121" Feb 19 19:50:37 crc kubenswrapper[4787]: I0219 19:50:37.049030 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qhkfm"] Feb 19 19:50:37 crc 
kubenswrapper[4787]: I0219 19:50:37.064482 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qhkfm"] Feb 19 19:50:38 crc kubenswrapper[4787]: I0219 19:50:38.909227 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9057a20e-ff30-454d-8c86-d42f7543571e" path="/var/lib/kubelet/pods/9057a20e-ff30-454d-8c86-d42f7543571e/volumes" Feb 19 19:50:44 crc kubenswrapper[4787]: I0219 19:50:44.030565 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-b84c5"] Feb 19 19:50:44 crc kubenswrapper[4787]: I0219 19:50:44.041827 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-b84c5"] Feb 19 19:50:44 crc kubenswrapper[4787]: I0219 19:50:44.911867 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b3acff-ad59-4a99-9d0c-bf1a5f94b570" path="/var/lib/kubelet/pods/b1b3acff-ad59-4a99-9d0c-bf1a5f94b570/volumes" Feb 19 19:50:49 crc kubenswrapper[4787]: I0219 19:50:49.030723 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-8z4ns"] Feb 19 19:50:49 crc kubenswrapper[4787]: I0219 19:50:49.041527 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-8z4ns"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.045309 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-045b-account-create-update-nntlf"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.056794 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mpb5r"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.069422 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-57ea-account-create-update-tvq42"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.081019 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-045b-account-create-update-nntlf"] Feb 19 19:50:50 crc kubenswrapper[4787]: 
I0219 19:50:50.091630 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-sq5bc"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.102101 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-v8g95"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.111539 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mpb5r"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.121174 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-v8g95"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.132065 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-sq5bc"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.142702 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-57ea-account-create-update-tvq42"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.152093 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ca57-account-create-update-6vtmg"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.161695 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-870d-account-create-update-pdvj4"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.171120 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-870d-account-create-update-pdvj4"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.181087 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ca57-account-create-update-6vtmg"] Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.906121 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33472d78-2ff8-4741-bfa6-c85d46fa60ae" path="/var/lib/kubelet/pods/33472d78-2ff8-4741-bfa6-c85d46fa60ae/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.907708 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="58c92c68-c226-47e8-b01a-3946123ed402" path="/var/lib/kubelet/pods/58c92c68-c226-47e8-b01a-3946123ed402/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.909946 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6c7de8-26a7-41ce-a452-6d392be91fe6" path="/var/lib/kubelet/pods/6f6c7de8-26a7-41ce-a452-6d392be91fe6/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.910768 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7747ed1a-72b3-4273-baf1-f34f1ab95760" path="/var/lib/kubelet/pods/7747ed1a-72b3-4273-baf1-f34f1ab95760/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.912513 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afce4d4f-f308-4581-be34-e782d95c89f3" path="/var/lib/kubelet/pods/afce4d4f-f308-4581-be34-e782d95c89f3/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.915537 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49278e3-c0b6-4bb6-ab6f-49386ea52e68" path="/var/lib/kubelet/pods/b49278e3-c0b6-4bb6-ab6f-49386ea52e68/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.917728 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b660e059-e520-4aa9-898b-28346b096b31" path="/var/lib/kubelet/pods/b660e059-e520-4aa9-898b-28346b096b31/volumes" Feb 19 19:50:50 crc kubenswrapper[4787]: I0219 19:50:50.920775 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c292df44-db14-4c45-8bd5-a0bd2da5e92f" path="/var/lib/kubelet/pods/c292df44-db14-4c45-8bd5-a0bd2da5e92f/volumes" Feb 19 19:50:59 crc kubenswrapper[4787]: I0219 19:50:59.043479 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4nk2w"] Feb 19 19:50:59 crc kubenswrapper[4787]: I0219 19:50:59.054747 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4nk2w"] Feb 19 19:51:00 crc kubenswrapper[4787]: I0219 
19:51:00.904476 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fced1c8-edcc-41a6-a703-3bde87073a5f" path="/var/lib/kubelet/pods/3fced1c8-edcc-41a6-a703-3bde87073a5f/volumes" Feb 19 19:51:05 crc kubenswrapper[4787]: I0219 19:51:05.716646 4787 scope.go:117] "RemoveContainer" containerID="54b6aa1c70b995b9d69eeb780a78f5194893cbdcc04fb0d0b4d5a6a1c88345e4" Feb 19 19:51:05 crc kubenswrapper[4787]: I0219 19:51:05.754093 4787 scope.go:117] "RemoveContainer" containerID="ace57fb0950953aaada5a8dcb6dddd51c9298dd43f3eb6bd0f613c2216375d6e" Feb 19 19:51:05 crc kubenswrapper[4787]: I0219 19:51:05.821172 4787 scope.go:117] "RemoveContainer" containerID="b327f806b4e58662b7ebd479b911d436d4675e9161b0f355f8cb2b5757532bee" Feb 19 19:51:05 crc kubenswrapper[4787]: I0219 19:51:05.874218 4787 scope.go:117] "RemoveContainer" containerID="63a41a953039c4e18c887aa0fb11d2421e0d8d9354cf360211cef74722cfd23c" Feb 19 19:51:05 crc kubenswrapper[4787]: I0219 19:51:05.925718 4787 scope.go:117] "RemoveContainer" containerID="b85ce4c95590a9895fe602d29a9caed333ac4eb3d6ac07a6f92b1cd7f6bf1897" Feb 19 19:51:05 crc kubenswrapper[4787]: I0219 19:51:05.969871 4787 scope.go:117] "RemoveContainer" containerID="c8e2b718d9eec8289cee23d7aa4a8e36b33681c5662a93628a54bb6ff883adbe" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.015591 4787 scope.go:117] "RemoveContainer" containerID="b717cb85ef34c4a64d1fcb71ca9df3beb560457b3bc948f32e40f62ef259adba" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.038129 4787 scope.go:117] "RemoveContainer" containerID="a95101f941e35ce54be29a333b8d0a91280d2cbeb629aa7ee05745eda176161e" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.123411 4787 scope.go:117] "RemoveContainer" containerID="6b5adf351050950dca29614aae3d2c86313c170d249e35e9e44a92515111eabb" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.144804 4787 scope.go:117] "RemoveContainer" containerID="c6ff3aa2a11131309e7d1f067abb69465ff674a9b54695cc115ddb857042c6e1" Feb 19 
19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.167097 4787 scope.go:117] "RemoveContainer" containerID="4eea30e1c2d57cab6dca8c1e9ed34e82fd906aa930c93182260e4bc8ff45e3f8" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.186697 4787 scope.go:117] "RemoveContainer" containerID="5c63885e26900562745a0667ee283ccce29e2a329a5d04ef8fabbf69f749ed04" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.207422 4787 scope.go:117] "RemoveContainer" containerID="d3423e1226e06f4573726ab8ca612b6c0235f68c2f0a34f1ae0403bb72f78bfb" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.244524 4787 scope.go:117] "RemoveContainer" containerID="48c2d6b9f5b18fcbc64df3b76adcf82a4aa359f70e2b2f4cb8b49ce7710e9143" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.267376 4787 scope.go:117] "RemoveContainer" containerID="1192d4fd488d23b4bfab3ae1cfcfdce4076e77fce5686d2eb40581fd54134742" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.289133 4787 scope.go:117] "RemoveContainer" containerID="7b1e3469562215544ae0097304081ccd61ff6c619113732310803bdc2375541c" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.310839 4787 scope.go:117] "RemoveContainer" containerID="12478eb7e60d554dbb626e89b4b83cc71c4eb55bf543241a2fde7f633eeb7755" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.334298 4787 scope.go:117] "RemoveContainer" containerID="5688eafa5b22d308de45b985431d8601c72dc3bd89291e3abe7e3fceb6601dd2" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.354406 4787 scope.go:117] "RemoveContainer" containerID="d7d318e32b475778800a7c552987b9dc4e743167f2f27dee8573440d8c16907b" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.376567 4787 scope.go:117] "RemoveContainer" containerID="59934b945fa6636b78af9a63d1847ebf90d7b3c4c8ca0d593840a0e445b82055" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.398151 4787 scope.go:117] "RemoveContainer" containerID="7313a99dbe4e49fced36e6c34ec33b98c51b21c36aeca38beaf09aa6c20a1859" Feb 19 19:51:06 crc 
kubenswrapper[4787]: I0219 19:51:06.419584 4787 scope.go:117] "RemoveContainer" containerID="8e9604f51670be8682449de31c23da3c385366cb8bbe204fd72dd368c297d4cc" Feb 19 19:51:06 crc kubenswrapper[4787]: I0219 19:51:06.452350 4787 scope.go:117] "RemoveContainer" containerID="42426968039e8481053b059b66d234c35c11587ffb047931659dcb74db84ee8b" Feb 19 19:51:09 crc kubenswrapper[4787]: I0219 19:51:09.263744 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:51:09 crc kubenswrapper[4787]: I0219 19:51:09.264094 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:51:33 crc kubenswrapper[4787]: I0219 19:51:33.044562 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qdlxg"] Feb 19 19:51:33 crc kubenswrapper[4787]: I0219 19:51:33.057643 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qdlxg"] Feb 19 19:51:34 crc kubenswrapper[4787]: I0219 19:51:34.905099 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e917a58d-1e20-4326-a9a1-44c35c41636c" path="/var/lib/kubelet/pods/e917a58d-1e20-4326-a9a1-44c35c41636c/volumes" Feb 19 19:51:39 crc kubenswrapper[4787]: I0219 19:51:39.263753 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 19:51:39 crc kubenswrapper[4787]: I0219 19:51:39.264808 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:51:44 crc kubenswrapper[4787]: I0219 19:51:44.058685 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-x77cx"] Feb 19 19:51:44 crc kubenswrapper[4787]: I0219 19:51:44.081552 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ktn47"] Feb 19 19:51:44 crc kubenswrapper[4787]: I0219 19:51:44.091359 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-x77cx"] Feb 19 19:51:44 crc kubenswrapper[4787]: I0219 19:51:44.100984 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ktn47"] Feb 19 19:51:44 crc kubenswrapper[4787]: I0219 19:51:44.905388 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c2ea68-2415-4d8d-88eb-3cf18c4eda8d" path="/var/lib/kubelet/pods/20c2ea68-2415-4d8d-88eb-3cf18c4eda8d/volumes" Feb 19 19:51:44 crc kubenswrapper[4787]: I0219 19:51:44.908365 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf1edd0-4024-4d19-be88-bd8f001052d8" path="/var/lib/kubelet/pods/dbf1edd0-4024-4d19-be88-bd8f001052d8/volumes" Feb 19 19:51:55 crc kubenswrapper[4787]: I0219 19:51:55.063044 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7vnc9"] Feb 19 19:51:55 crc kubenswrapper[4787]: I0219 19:51:55.077721 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7vnc9"] Feb 19 19:51:56 crc kubenswrapper[4787]: I0219 19:51:56.909274 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c3100990-268a-4c84-8e81-ba54457b771b" path="/var/lib/kubelet/pods/c3100990-268a-4c84-8e81-ba54457b771b/volumes" Feb 19 19:52:00 crc kubenswrapper[4787]: I0219 19:52:00.032708 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vlnf7"] Feb 19 19:52:00 crc kubenswrapper[4787]: I0219 19:52:00.044360 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vlnf7"] Feb 19 19:52:00 crc kubenswrapper[4787]: I0219 19:52:00.909885 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5aa867-a7f1-4a64-a8cd-d515fb1e210d" path="/var/lib/kubelet/pods/0a5aa867-a7f1-4a64-a8cd-d515fb1e210d/volumes" Feb 19 19:52:06 crc kubenswrapper[4787]: I0219 19:52:06.901172 4787 scope.go:117] "RemoveContainer" containerID="c318eb17cbaba566429d882ec0de4a7fe9adefdfb63c306717b0a0d4a73897cc" Feb 19 19:52:06 crc kubenswrapper[4787]: I0219 19:52:06.937720 4787 scope.go:117] "RemoveContainer" containerID="3f298000d71cb4c816726423190729260496dab182211a05859d7ba65f6cd853" Feb 19 19:52:06 crc kubenswrapper[4787]: I0219 19:52:06.986419 4787 scope.go:117] "RemoveContainer" containerID="1aad18d1fb84f4db9bb63102e3b11da22fdc18ae9e63d1cf56ccc5b2be39ab53" Feb 19 19:52:07 crc kubenswrapper[4787]: I0219 19:52:07.021125 4787 scope.go:117] "RemoveContainer" containerID="8c93defefa4d9b3c69242ed2499139e38c0f1a1e427eb0d613b65075084e8f0c" Feb 19 19:52:07 crc kubenswrapper[4787]: I0219 19:52:07.082550 4787 scope.go:117] "RemoveContainer" containerID="dfe8dca608ea0e3d0aa8a872d7bb42c345d6d1babe546ff08c4757eb7c5a638e" Feb 19 19:52:07 crc kubenswrapper[4787]: I0219 19:52:07.136315 4787 scope.go:117] "RemoveContainer" containerID="c77ab1ada1a9a0a622de8eb8061f0820eb39c64157f3191a406aa33a87ff1af4" Feb 19 19:52:07 crc kubenswrapper[4787]: I0219 19:52:07.186451 4787 scope.go:117] "RemoveContainer" containerID="ade3a529923d763b53fa405a05c7b4f2f5e17f97a837df7a9e128d9fb7c36d40" Feb 19 19:52:07 crc kubenswrapper[4787]: I0219 19:52:07.237125 
4787 scope.go:117] "RemoveContainer" containerID="33c5329e15100f521363d042f0452651923f43c508a25503baff1a8ed5c2315e" Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.263489 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.265268 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.265529 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.266946 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b28d37fcd770cc8ddbfa968a92e9da36d6c302db5f51061d56875fb7273ab058"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.267219 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://b28d37fcd770cc8ddbfa968a92e9da36d6c302db5f51061d56875fb7273ab058" gracePeriod=600 Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.470197 4787 generic.go:334] "Generic (PLEG): 
container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="b28d37fcd770cc8ddbfa968a92e9da36d6c302db5f51061d56875fb7273ab058" exitCode=0 Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.470280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"b28d37fcd770cc8ddbfa968a92e9da36d6c302db5f51061d56875fb7273ab058"} Feb 19 19:52:09 crc kubenswrapper[4787]: I0219 19:52:09.470485 4787 scope.go:117] "RemoveContainer" containerID="d8b6def14cfec15f3f2ca5ebfefb6474a3702d613971df28d4563386373b1edc" Feb 19 19:52:10 crc kubenswrapper[4787]: I0219 19:52:10.487835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a"} Feb 19 19:52:37 crc kubenswrapper[4787]: I0219 19:52:37.044497 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c1f3-account-create-update-2pwzh"] Feb 19 19:52:37 crc kubenswrapper[4787]: I0219 19:52:37.058659 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c1f3-account-create-update-2pwzh"] Feb 19 19:52:37 crc kubenswrapper[4787]: I0219 19:52:37.068883 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-crv9s"] Feb 19 19:52:37 crc kubenswrapper[4787]: I0219 19:52:37.078683 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-crv9s"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.036457 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0dd6-account-create-update-nqvtx"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.046809 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-mfqxg"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.057436 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0dd6-account-create-update-nqvtx"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.068268 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3e63-account-create-update-nmrrq"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.078179 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-zxvrc"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.092008 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mfqxg"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.102371 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-zxvrc"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.113818 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3e63-account-create-update-nmrrq"] Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.912117 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c45f6a-d1bc-4584-9390-8b892bbbf384" path="/var/lib/kubelet/pods/22c45f6a-d1bc-4584-9390-8b892bbbf384/volumes" Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.913001 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e17e04-4491-4b18-a420-c4f3ceaa09f3" path="/var/lib/kubelet/pods/79e17e04-4491-4b18-a420-c4f3ceaa09f3/volumes" Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.913549 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dad9422-4bf0-478a-8b71-2892fe8ba113" path="/var/lib/kubelet/pods/7dad9422-4bf0-478a-8b71-2892fe8ba113/volumes" Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.925736 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49123d4-eb5d-4788-a066-9ca1addf8bf6" 
path="/var/lib/kubelet/pods/a49123d4-eb5d-4788-a066-9ca1addf8bf6/volumes" Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.926374 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc57fd5d-2a99-4ef3-b706-bfa09e570c5f" path="/var/lib/kubelet/pods/cc57fd5d-2a99-4ef3-b706-bfa09e570c5f/volumes" Feb 19 19:52:38 crc kubenswrapper[4787]: I0219 19:52:38.926955 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e798ac38-ff2d-4ed7-8008-b2869613cccb" path="/var/lib/kubelet/pods/e798ac38-ff2d-4ed7-8008-b2869613cccb/volumes" Feb 19 19:52:41 crc kubenswrapper[4787]: I0219 19:52:41.815569 4787 generic.go:334] "Generic (PLEG): container finished" podID="7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" containerID="709d423c59b027abdf176824bb3ae0de90ed468f46972f460fe311d84536f967" exitCode=0 Feb 19 19:52:41 crc kubenswrapper[4787]: I0219 19:52:41.815673 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" event={"ID":"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca","Type":"ContainerDied","Data":"709d423c59b027abdf176824bb3ae0de90ed468f46972f460fe311d84536f967"} Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.383742 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.550210 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-inventory\") pod \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") "
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.553175 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v22zp\" (UniqueName: \"kubernetes.io/projected/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-kube-api-access-v22zp\") pod \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") "
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.553245 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-ssh-key-openstack-edpm-ipam\") pod \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\" (UID: \"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca\") "
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.576651 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-kube-api-access-v22zp" (OuterVolumeSpecName: "kube-api-access-v22zp") pod "7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" (UID: "7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca"). InnerVolumeSpecName "kube-api-access-v22zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.587220 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" (UID: "7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.587756 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-inventory" (OuterVolumeSpecName: "inventory") pod "7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" (UID: "7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.665145 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v22zp\" (UniqueName: \"kubernetes.io/projected/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-kube-api-access-v22zp\") on node \"crc\" DevicePath \"\""
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.665189 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.665227 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.836260 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2" event={"ID":"7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca","Type":"ContainerDied","Data":"87753b241c5682c1baa5620f42838d64b64d1a9878ee3243ff972263ada530f2"}
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.836313 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87753b241c5682c1baa5620f42838d64b64d1a9878ee3243ff972263ada530f2"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.836422 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.941180 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"]
Feb 19 19:52:43 crc kubenswrapper[4787]: E0219 19:52:43.941800 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.941826 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.942160 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.943187 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.949291 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.949388 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.949303 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.954465 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.963972 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"]
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.974552 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.975003 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:43 crc kubenswrapper[4787]: I0219 19:52:43.975343 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pblk\" (UniqueName: \"kubernetes.io/projected/718c9653-d673-4f8b-bf7f-43d983bd9854-kube-api-access-9pblk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.045100 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-75ptd"]
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.048304 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.063148 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75ptd"]
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.077803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.077911 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pblk\" (UniqueName: \"kubernetes.io/projected/718c9653-d673-4f8b-bf7f-43d983bd9854-kube-api-access-9pblk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.077986 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dk5f\" (UniqueName: \"kubernetes.io/projected/ec8ee82b-813f-4680-ad0e-387bf877f97e-kube-api-access-5dk5f\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.078044 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-utilities\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.078085 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-catalog-content\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.078110 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.092813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.095852 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.099115 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pblk\" (UniqueName: \"kubernetes.io/projected/718c9653-d673-4f8b-bf7f-43d983bd9854-kube-api-access-9pblk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.179195 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dk5f\" (UniqueName: \"kubernetes.io/projected/ec8ee82b-813f-4680-ad0e-387bf877f97e-kube-api-access-5dk5f\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.179279 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-utilities\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.179321 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-catalog-content\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.179982 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-catalog-content\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.180052 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-utilities\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.197962 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dk5f\" (UniqueName: \"kubernetes.io/projected/ec8ee82b-813f-4680-ad0e-387bf877f97e-kube-api-access-5dk5f\") pod \"certified-operators-75ptd\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") " pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.266105 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.370518 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:44 crc kubenswrapper[4787]: I0219 19:52:44.959986 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75ptd"]
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.012236 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw"]
Feb 19 19:52:45 crc kubenswrapper[4787]: W0219 19:52:45.028416 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718c9653_d673_4f8b_bf7f_43d983bd9854.slice/crio-6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2 WatchSource:0}: Error finding container 6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2: Status 404 returned error can't find the container with id 6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.857233 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" event={"ID":"718c9653-d673-4f8b-bf7f-43d983bd9854","Type":"ContainerStarted","Data":"377be7c0f1900e633ab0d269866be8ac6f65e8458dd095b2e2f94f75d9b8bb1d"}
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.857872 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" event={"ID":"718c9653-d673-4f8b-bf7f-43d983bd9854","Type":"ContainerStarted","Data":"6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2"}
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.859113 4787 generic.go:334] "Generic (PLEG): container finished" podID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerID="06dff87ce3690be9716e4f9f95229a2780f5c6382fa04d185b96d2ad646a799e" exitCode=0
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.859147 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerDied","Data":"06dff87ce3690be9716e4f9f95229a2780f5c6382fa04d185b96d2ad646a799e"}
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.859167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerStarted","Data":"bdc45a149a67bf40211a8f84fab182e4cd7fc216153dea2249b3f0a8b3f25470"}
Feb 19 19:52:45 crc kubenswrapper[4787]: I0219 19:52:45.874393 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" podStartSLOduration=2.4496089 podStartE2EDuration="2.874373202s" podCreationTimestamp="2026-02-19 19:52:43 +0000 UTC" firstStartedPulling="2026-02-19 19:52:45.031364472 +0000 UTC m=+2032.822030414" lastFinishedPulling="2026-02-19 19:52:45.456128774 +0000 UTC m=+2033.246794716" observedRunningTime="2026-02-19 19:52:45.871828909 +0000 UTC m=+2033.662494871" watchObservedRunningTime="2026-02-19 19:52:45.874373202 +0000 UTC m=+2033.665039144"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.365647 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nc294"]
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.368273 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.382775 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc294"]
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.534837 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn4fw\" (UniqueName: \"kubernetes.io/projected/5614031e-fc3a-4d6c-afc0-7268364d46b5-kube-api-access-pn4fw\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.534987 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-utilities\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.535266 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-catalog-content\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.637931 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-utilities\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.638035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-catalog-content\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.638172 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn4fw\" (UniqueName: \"kubernetes.io/projected/5614031e-fc3a-4d6c-afc0-7268364d46b5-kube-api-access-pn4fw\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.638430 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-utilities\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.638739 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-catalog-content\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.659962 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn4fw\" (UniqueName: \"kubernetes.io/projected/5614031e-fc3a-4d6c-afc0-7268364d46b5-kube-api-access-pn4fw\") pod \"redhat-operators-nc294\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") " pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:46 crc kubenswrapper[4787]: I0219 19:52:46.688889 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:52:47 crc kubenswrapper[4787]: I0219 19:52:47.315172 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc294"]
Feb 19 19:52:47 crc kubenswrapper[4787]: I0219 19:52:47.913814 4787 generic.go:334] "Generic (PLEG): container finished" podID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerID="b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3" exitCode=0
Feb 19 19:52:47 crc kubenswrapper[4787]: I0219 19:52:47.913877 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerDied","Data":"b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3"}
Feb 19 19:52:47 crc kubenswrapper[4787]: I0219 19:52:47.914064 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerStarted","Data":"b58dc37d03caacf2c9f3b73e2c2c3438f79175a0c0bf00bae9b72b1445be555a"}
Feb 19 19:52:47 crc kubenswrapper[4787]: I0219 19:52:47.916245 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerStarted","Data":"4ad6bebd19c5fab1143b2f264173b17aa268154215c187b612d8eab271d51882"}
Feb 19 19:52:48 crc kubenswrapper[4787]: I0219 19:52:48.929842 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerStarted","Data":"8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca"}
Feb 19 19:52:49 crc kubenswrapper[4787]: I0219 19:52:49.942979 4787 generic.go:334] "Generic (PLEG): container finished" podID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerID="4ad6bebd19c5fab1143b2f264173b17aa268154215c187b612d8eab271d51882" exitCode=0
Feb 19 19:52:49 crc kubenswrapper[4787]: I0219 19:52:49.943045 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerDied","Data":"4ad6bebd19c5fab1143b2f264173b17aa268154215c187b612d8eab271d51882"}
Feb 19 19:52:50 crc kubenswrapper[4787]: I0219 19:52:50.956847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerStarted","Data":"cbbe8cfb56c79ff3dc68d86eead317e2d59c3da703bc7aa9f14286c9b05bf9b9"}
Feb 19 19:52:50 crc kubenswrapper[4787]: I0219 19:52:50.980810 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-75ptd" podStartSLOduration=2.490465953 podStartE2EDuration="6.980793731s" podCreationTimestamp="2026-02-19 19:52:44 +0000 UTC" firstStartedPulling="2026-02-19 19:52:45.860506408 +0000 UTC m=+2033.651172350" lastFinishedPulling="2026-02-19 19:52:50.350834186 +0000 UTC m=+2038.141500128" observedRunningTime="2026-02-19 19:52:50.976423117 +0000 UTC m=+2038.767089059" watchObservedRunningTime="2026-02-19 19:52:50.980793731 +0000 UTC m=+2038.771459673"
Feb 19 19:52:54 crc kubenswrapper[4787]: I0219 19:52:54.371347 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:54 crc kubenswrapper[4787]: I0219 19:52:54.371966 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:52:55 crc kubenswrapper[4787]: I0219 19:52:55.424153 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-75ptd" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="registry-server" probeResult="failure" output=<
Feb 19 19:52:55 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s
Feb 19 19:52:55 crc kubenswrapper[4787]: >
Feb 19 19:52:56 crc kubenswrapper[4787]: I0219 19:52:56.006624 4787 generic.go:334] "Generic (PLEG): container finished" podID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerID="8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca" exitCode=0
Feb 19 19:52:56 crc kubenswrapper[4787]: I0219 19:52:56.006636 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerDied","Data":"8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca"}
Feb 19 19:52:57 crc kubenswrapper[4787]: I0219 19:52:57.022027 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerStarted","Data":"2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd"}
Feb 19 19:52:57 crc kubenswrapper[4787]: I0219 19:52:57.050956 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nc294" podStartSLOduration=2.575851453 podStartE2EDuration="11.050932964s" podCreationTimestamp="2026-02-19 19:52:46 +0000 UTC" firstStartedPulling="2026-02-19 19:52:47.915743673 +0000 UTC m=+2035.706409615" lastFinishedPulling="2026-02-19 19:52:56.390825184 +0000 UTC m=+2044.181491126" observedRunningTime="2026-02-19 19:52:57.045265393 +0000 UTC m=+2044.835931345" watchObservedRunningTime="2026-02-19 19:52:57.050932964 +0000 UTC m=+2044.841598896"
Feb 19 19:53:04 crc kubenswrapper[4787]: I0219 19:53:04.432807 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:53:04 crc kubenswrapper[4787]: I0219 19:53:04.495575 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:53:06 crc kubenswrapper[4787]: I0219 19:53:06.550900 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-75ptd"]
Feb 19 19:53:06 crc kubenswrapper[4787]: I0219 19:53:06.551679 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-75ptd" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="registry-server" containerID="cri-o://cbbe8cfb56c79ff3dc68d86eead317e2d59c3da703bc7aa9f14286c9b05bf9b9" gracePeriod=2
Feb 19 19:53:06 crc kubenswrapper[4787]: I0219 19:53:06.689404 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:53:06 crc kubenswrapper[4787]: I0219 19:53:06.689474 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:53:06 crc kubenswrapper[4787]: I0219 19:53:06.749579 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.123923 4787 generic.go:334] "Generic (PLEG): container finished" podID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerID="cbbe8cfb56c79ff3dc68d86eead317e2d59c3da703bc7aa9f14286c9b05bf9b9" exitCode=0
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.123993 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerDied","Data":"cbbe8cfb56c79ff3dc68d86eead317e2d59c3da703bc7aa9f14286c9b05bf9b9"}
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.124307 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75ptd" event={"ID":"ec8ee82b-813f-4680-ad0e-387bf877f97e","Type":"ContainerDied","Data":"bdc45a149a67bf40211a8f84fab182e4cd7fc216153dea2249b3f0a8b3f25470"}
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.124327 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc45a149a67bf40211a8f84fab182e4cd7fc216153dea2249b3f0a8b3f25470"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.180995 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.238975 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.320363 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dk5f\" (UniqueName: \"kubernetes.io/projected/ec8ee82b-813f-4680-ad0e-387bf877f97e-kube-api-access-5dk5f\") pod \"ec8ee82b-813f-4680-ad0e-387bf877f97e\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") "
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.320410 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-utilities\") pod \"ec8ee82b-813f-4680-ad0e-387bf877f97e\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") "
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.320576 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-catalog-content\") pod \"ec8ee82b-813f-4680-ad0e-387bf877f97e\" (UID: \"ec8ee82b-813f-4680-ad0e-387bf877f97e\") "
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.321581 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-utilities" (OuterVolumeSpecName: "utilities") pod "ec8ee82b-813f-4680-ad0e-387bf877f97e" (UID: "ec8ee82b-813f-4680-ad0e-387bf877f97e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.331202 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8ee82b-813f-4680-ad0e-387bf877f97e-kube-api-access-5dk5f" (OuterVolumeSpecName: "kube-api-access-5dk5f") pod "ec8ee82b-813f-4680-ad0e-387bf877f97e" (UID: "ec8ee82b-813f-4680-ad0e-387bf877f97e"). InnerVolumeSpecName "kube-api-access-5dk5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.366385 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec8ee82b-813f-4680-ad0e-387bf877f97e" (UID: "ec8ee82b-813f-4680-ad0e-387bf877f97e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.412520 4787 scope.go:117] "RemoveContainer" containerID="f4de49a172364230b49d28ed091919721c6a711613d49d8c091b70003f6b8ca7"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.423718 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.423754 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dk5f\" (UniqueName: \"kubernetes.io/projected/ec8ee82b-813f-4680-ad0e-387bf877f97e-kube-api-access-5dk5f\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.423770 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec8ee82b-813f-4680-ad0e-387bf877f97e-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.447811 4787 scope.go:117] "RemoveContainer" containerID="20c97dc13e4384cac8d2f59d9eb73ffef478b18a6f46357f3d0fa07ea5ac91bf"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.509978 4787 scope.go:117] "RemoveContainer" containerID="1515a021c019786f4ca25f4e5d048e65a564e127bdfc4d4fc639420829b29b60"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.621454 4787 scope.go:117] "RemoveContainer" containerID="c47491b48414203e976c1ffde0ba8604b69d58319bfe947894771c5d7ed7d9c0"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.644424 4787 scope.go:117] "RemoveContainer" containerID="04f7a93e31587a111c58dedb13c8ae0228b86726df38b6eb289c0638b92d5c75"
Feb 19 19:53:07 crc kubenswrapper[4787]: I0219 19:53:07.672024 4787 scope.go:117] "RemoveContainer" containerID="25e6bb95e5ac41fe15e6482772a41f1aa9f471e7b4c2829e2e6ac03d65963d9e"
Feb 19 19:53:08 crc kubenswrapper[4787]: I0219 19:53:08.134024 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75ptd"
Feb 19 19:53:08 crc kubenswrapper[4787]: I0219 19:53:08.186254 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-75ptd"]
Feb 19 19:53:08 crc kubenswrapper[4787]: I0219 19:53:08.203707 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-75ptd"]
Feb 19 19:53:08 crc kubenswrapper[4787]: I0219 19:53:08.903508 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" path="/var/lib/kubelet/pods/ec8ee82b-813f-4680-ad0e-387bf877f97e/volumes"
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.152649 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc294"]
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.153151 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nc294" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="registry-server" containerID="cri-o://2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd" gracePeriod=2
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.781760 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc294"
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.887955 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-utilities\") pod \"5614031e-fc3a-4d6c-afc0-7268364d46b5\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") "
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.888279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn4fw\" (UniqueName: \"kubernetes.io/projected/5614031e-fc3a-4d6c-afc0-7268364d46b5-kube-api-access-pn4fw\") pod \"5614031e-fc3a-4d6c-afc0-7268364d46b5\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") "
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.888361 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-catalog-content\") pod \"5614031e-fc3a-4d6c-afc0-7268364d46b5\" (UID: \"5614031e-fc3a-4d6c-afc0-7268364d46b5\") "
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.889056 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-utilities" (OuterVolumeSpecName: "utilities") pod "5614031e-fc3a-4d6c-afc0-7268364d46b5" (UID: "5614031e-fc3a-4d6c-afc0-7268364d46b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.889211 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.895320 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5614031e-fc3a-4d6c-afc0-7268364d46b5-kube-api-access-pn4fw" (OuterVolumeSpecName: "kube-api-access-pn4fw") pod "5614031e-fc3a-4d6c-afc0-7268364d46b5" (UID: "5614031e-fc3a-4d6c-afc0-7268364d46b5"). InnerVolumeSpecName "kube-api-access-pn4fw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 19:53:09 crc kubenswrapper[4787]: I0219 19:53:09.992236 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn4fw\" (UniqueName: \"kubernetes.io/projected/5614031e-fc3a-4d6c-afc0-7268364d46b5-kube-api-access-pn4fw\") on node \"crc\" DevicePath \"\""
Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.043087 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5614031e-fc3a-4d6c-afc0-7268364d46b5" (UID: "5614031e-fc3a-4d6c-afc0-7268364d46b5"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.093487 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5614031e-fc3a-4d6c-afc0-7268364d46b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.159239 4787 generic.go:334] "Generic (PLEG): container finished" podID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerID="2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd" exitCode=0 Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.159280 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerDied","Data":"2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd"} Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.159306 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc294" event={"ID":"5614031e-fc3a-4d6c-afc0-7268364d46b5","Type":"ContainerDied","Data":"b58dc37d03caacf2c9f3b73e2c2c3438f79175a0c0bf00bae9b72b1445be555a"} Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.159320 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc294" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.159336 4787 scope.go:117] "RemoveContainer" containerID="2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.186771 4787 scope.go:117] "RemoveContainer" containerID="8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.218076 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc294"] Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.238256 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nc294"] Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.248798 4787 scope.go:117] "RemoveContainer" containerID="b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.277059 4787 scope.go:117] "RemoveContainer" containerID="2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd" Feb 19 19:53:10 crc kubenswrapper[4787]: E0219 19:53:10.277525 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd\": container with ID starting with 2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd not found: ID does not exist" containerID="2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.277575 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd"} err="failed to get container status \"2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd\": rpc error: code = NotFound desc = could not find container 
\"2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd\": container with ID starting with 2a29fba6601294708f2e2485060d0e46f0cad1ddbbdedeb1ec1bd31d6e76d8fd not found: ID does not exist" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.277673 4787 scope.go:117] "RemoveContainer" containerID="8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca" Feb 19 19:53:10 crc kubenswrapper[4787]: E0219 19:53:10.278042 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca\": container with ID starting with 8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca not found: ID does not exist" containerID="8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.278092 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca"} err="failed to get container status \"8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca\": rpc error: code = NotFound desc = could not find container \"8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca\": container with ID starting with 8a4929c45cd76f5d9329411bcedea68a0460ff6626e81aee78057f09f6189bca not found: ID does not exist" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.278120 4787 scope.go:117] "RemoveContainer" containerID="b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3" Feb 19 19:53:10 crc kubenswrapper[4787]: E0219 19:53:10.278555 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3\": container with ID starting with b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3 not found: ID does not exist" 
containerID="b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.278599 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3"} err="failed to get container status \"b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3\": rpc error: code = NotFound desc = could not find container \"b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3\": container with ID starting with b90ee32f086751925789b979723df85313c989c6bcc4d28d7a2bd325b31c00f3 not found: ID does not exist" Feb 19 19:53:10 crc kubenswrapper[4787]: I0219 19:53:10.917902 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" path="/var/lib/kubelet/pods/5614031e-fc3a-4d6c-afc0-7268364d46b5/volumes" Feb 19 19:53:16 crc kubenswrapper[4787]: I0219 19:53:16.048691 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rz4fd"] Feb 19 19:53:16 crc kubenswrapper[4787]: I0219 19:53:16.062004 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rz4fd"] Feb 19 19:53:16 crc kubenswrapper[4787]: I0219 19:53:16.906321 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18009b64-0e4a-438d-9a5e-7619312865aa" path="/var/lib/kubelet/pods/18009b64-0e4a-438d-9a5e-7619312865aa/volumes" Feb 19 19:53:42 crc kubenswrapper[4787]: I0219 19:53:42.050460 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kn2fb"] Feb 19 19:53:42 crc kubenswrapper[4787]: I0219 19:53:42.062197 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2hq2k"] Feb 19 19:53:42 crc kubenswrapper[4787]: I0219 19:53:42.071660 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-kn2fb"] Feb 19 19:53:42 crc kubenswrapper[4787]: I0219 19:53:42.080688 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2hq2k"] Feb 19 19:53:42 crc kubenswrapper[4787]: I0219 19:53:42.906909 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d414ea7-bf01-41ad-9d7e-ed31676ae9e0" path="/var/lib/kubelet/pods/0d414ea7-bf01-41ad-9d7e-ed31676ae9e0/volumes" Feb 19 19:53:42 crc kubenswrapper[4787]: I0219 19:53:42.908079 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a96d0657-656e-4614-bac3-490e595478dd" path="/var/lib/kubelet/pods/a96d0657-656e-4614-bac3-490e595478dd/volumes" Feb 19 19:53:49 crc kubenswrapper[4787]: I0219 19:53:49.031943 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-vmllq"] Feb 19 19:53:49 crc kubenswrapper[4787]: I0219 19:53:49.046108 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-vmllq"] Feb 19 19:53:49 crc kubenswrapper[4787]: I0219 19:53:49.058704 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7101-account-create-update-hn7v5"] Feb 19 19:53:49 crc kubenswrapper[4787]: I0219 19:53:49.070864 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7101-account-create-update-hn7v5"] Feb 19 19:53:50 crc kubenswrapper[4787]: I0219 19:53:50.615714 4787 generic.go:334] "Generic (PLEG): container finished" podID="718c9653-d673-4f8b-bf7f-43d983bd9854" containerID="377be7c0f1900e633ab0d269866be8ac6f65e8458dd095b2e2f94f75d9b8bb1d" exitCode=0 Feb 19 19:53:50 crc kubenswrapper[4787]: I0219 19:53:50.615807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" event={"ID":"718c9653-d673-4f8b-bf7f-43d983bd9854","Type":"ContainerDied","Data":"377be7c0f1900e633ab0d269866be8ac6f65e8458dd095b2e2f94f75d9b8bb1d"} Feb 19 19:53:50 crc 
kubenswrapper[4787]: I0219 19:53:50.905799 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bd62de-d209-432d-acdc-af4d1d3d7392" path="/var/lib/kubelet/pods/54bd62de-d209-432d-acdc-af4d1d3d7392/volumes" Feb 19 19:53:50 crc kubenswrapper[4787]: I0219 19:53:50.906583 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d3b03a5-b16c-4c89-8196-3d26a077661a" path="/var/lib/kubelet/pods/5d3b03a5-b16c-4c89-8196-3d26a077661a/volumes" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.269819 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.402931 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-inventory\") pod \"718c9653-d673-4f8b-bf7f-43d983bd9854\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.403092 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-ssh-key-openstack-edpm-ipam\") pod \"718c9653-d673-4f8b-bf7f-43d983bd9854\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.403176 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pblk\" (UniqueName: \"kubernetes.io/projected/718c9653-d673-4f8b-bf7f-43d983bd9854-kube-api-access-9pblk\") pod \"718c9653-d673-4f8b-bf7f-43d983bd9854\" (UID: \"718c9653-d673-4f8b-bf7f-43d983bd9854\") " Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.409906 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/718c9653-d673-4f8b-bf7f-43d983bd9854-kube-api-access-9pblk" (OuterVolumeSpecName: "kube-api-access-9pblk") pod "718c9653-d673-4f8b-bf7f-43d983bd9854" (UID: "718c9653-d673-4f8b-bf7f-43d983bd9854"). InnerVolumeSpecName "kube-api-access-9pblk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.450767 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-inventory" (OuterVolumeSpecName: "inventory") pod "718c9653-d673-4f8b-bf7f-43d983bd9854" (UID: "718c9653-d673-4f8b-bf7f-43d983bd9854"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.454867 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "718c9653-d673-4f8b-bf7f-43d983bd9854" (UID: "718c9653-d673-4f8b-bf7f-43d983bd9854"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.505985 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pblk\" (UniqueName: \"kubernetes.io/projected/718c9653-d673-4f8b-bf7f-43d983bd9854-kube-api-access-9pblk\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.506028 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.506040 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/718c9653-d673-4f8b-bf7f-43d983bd9854-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.658596 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" event={"ID":"718c9653-d673-4f8b-bf7f-43d983bd9854","Type":"ContainerDied","Data":"6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2"} Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.658920 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.659046 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.772470 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk"] Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.773077 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="extract-utilities" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773104 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="extract-utilities" Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.773130 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="registry-server" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773139 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="registry-server" Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.773167 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718c9653-d673-4f8b-bf7f-43d983bd9854" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773177 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="718c9653-d673-4f8b-bf7f-43d983bd9854" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.773199 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="extract-content" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773206 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="extract-content" Feb 19 19:53:52 crc 
kubenswrapper[4787]: E0219 19:53:52.773249 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="extract-content" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773257 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="extract-content" Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.773268 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="extract-utilities" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773289 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="extract-utilities" Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.773301 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="registry-server" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773308 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="registry-server" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773560 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5614031e-fc3a-4d6c-afc0-7268364d46b5" containerName="registry-server" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773628 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="718c9653-d673-4f8b-bf7f-43d983bd9854" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.773645 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8ee82b-813f-4680-ad0e-387bf877f97e" containerName="registry-server" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.774725 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.777875 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.778102 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.778255 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.779227 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.792207 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk"] Feb 19 19:53:52 crc kubenswrapper[4787]: E0219 19:53:52.812480 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718c9653_d673_4f8b_bf7f_43d983bd9854.slice/crio-6caf45abe5c5196d5184368b0ace1cafd13f82be494db553f53a7d3eaca515c2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718c9653_d673_4f8b_bf7f_43d983bd9854.slice\": RecentStats: unable to find data in memory cache]" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.814405 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.814572 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnvjx\" (UniqueName: \"kubernetes.io/projected/3b90f36e-f50d-4430-8596-b157390ed5c7-kube-api-access-nnvjx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.814735 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.917165 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnvjx\" (UniqueName: \"kubernetes.io/projected/3b90f36e-f50d-4430-8596-b157390ed5c7-kube-api-access-nnvjx\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.917236 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 
crc kubenswrapper[4787]: I0219 19:53:52.917424 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.920405 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.920499 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.932634 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.933602 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:52 crc kubenswrapper[4787]: I0219 19:53:52.934082 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnvjx\" (UniqueName: \"kubernetes.io/projected/3b90f36e-f50d-4430-8596-b157390ed5c7-kube-api-access-nnvjx\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:53 crc kubenswrapper[4787]: I0219 19:53:53.101731 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:53:53 crc kubenswrapper[4787]: I0219 19:53:53.110411 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:53:53 crc kubenswrapper[4787]: I0219 19:53:53.590761 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk"] Feb 19 19:53:53 crc kubenswrapper[4787]: I0219 19:53:53.668892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" event={"ID":"3b90f36e-f50d-4430-8596-b157390ed5c7","Type":"ContainerStarted","Data":"a52377d27d1e8c07df879943f4dacd817cd42729bff912a79c70bf6896f8a094"} Feb 19 19:53:53 crc kubenswrapper[4787]: I0219 19:53:53.999778 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:53:54 crc kubenswrapper[4787]: I0219 19:53:54.684314 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" event={"ID":"3b90f36e-f50d-4430-8596-b157390ed5c7","Type":"ContainerStarted","Data":"5000f48b15fa5e1597395ed1aeb2c8a4814d407f9107bb98556c1c156e4bc5ef"} Feb 19 19:53:54 crc kubenswrapper[4787]: I0219 19:53:54.706419 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" podStartSLOduration=2.302941054 podStartE2EDuration="2.706394001s" podCreationTimestamp="2026-02-19 19:53:52 +0000 UTC" 
firstStartedPulling="2026-02-19 19:53:53.593784872 +0000 UTC m=+2101.384450814" lastFinishedPulling="2026-02-19 19:53:53.997237819 +0000 UTC m=+2101.787903761" observedRunningTime="2026-02-19 19:53:54.697598921 +0000 UTC m=+2102.488264863" watchObservedRunningTime="2026-02-19 19:53:54.706394001 +0000 UTC m=+2102.497059943" Feb 19 19:53:59 crc kubenswrapper[4787]: I0219 19:53:59.744376 4787 generic.go:334] "Generic (PLEG): container finished" podID="3b90f36e-f50d-4430-8596-b157390ed5c7" containerID="5000f48b15fa5e1597395ed1aeb2c8a4814d407f9107bb98556c1c156e4bc5ef" exitCode=0 Feb 19 19:53:59 crc kubenswrapper[4787]: I0219 19:53:59.744443 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" event={"ID":"3b90f36e-f50d-4430-8596-b157390ed5c7","Type":"ContainerDied","Data":"5000f48b15fa5e1597395ed1aeb2c8a4814d407f9107bb98556c1c156e4bc5ef"} Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.334877 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.425422 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnvjx\" (UniqueName: \"kubernetes.io/projected/3b90f36e-f50d-4430-8596-b157390ed5c7-kube-api-access-nnvjx\") pod \"3b90f36e-f50d-4430-8596-b157390ed5c7\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.425709 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-ssh-key-openstack-edpm-ipam\") pod \"3b90f36e-f50d-4430-8596-b157390ed5c7\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.425741 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-inventory\") pod \"3b90f36e-f50d-4430-8596-b157390ed5c7\" (UID: \"3b90f36e-f50d-4430-8596-b157390ed5c7\") " Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.432593 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b90f36e-f50d-4430-8596-b157390ed5c7-kube-api-access-nnvjx" (OuterVolumeSpecName: "kube-api-access-nnvjx") pod "3b90f36e-f50d-4430-8596-b157390ed5c7" (UID: "3b90f36e-f50d-4430-8596-b157390ed5c7"). InnerVolumeSpecName "kube-api-access-nnvjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.508796 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b90f36e-f50d-4430-8596-b157390ed5c7" (UID: "3b90f36e-f50d-4430-8596-b157390ed5c7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.522790 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-inventory" (OuterVolumeSpecName: "inventory") pod "3b90f36e-f50d-4430-8596-b157390ed5c7" (UID: "3b90f36e-f50d-4430-8596-b157390ed5c7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.532371 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnvjx\" (UniqueName: \"kubernetes.io/projected/3b90f36e-f50d-4430-8596-b157390ed5c7-kube-api-access-nnvjx\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.532410 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.532421 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b90f36e-f50d-4430-8596-b157390ed5c7-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.768010 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" 
event={"ID":"3b90f36e-f50d-4430-8596-b157390ed5c7","Type":"ContainerDied","Data":"a52377d27d1e8c07df879943f4dacd817cd42729bff912a79c70bf6896f8a094"} Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.768227 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a52377d27d1e8c07df879943f4dacd817cd42729bff912a79c70bf6896f8a094" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.768045 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.836708 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh"] Feb 19 19:54:01 crc kubenswrapper[4787]: E0219 19:54:01.837172 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b90f36e-f50d-4430-8596-b157390ed5c7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.837191 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b90f36e-f50d-4430-8596-b157390ed5c7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.837468 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b90f36e-f50d-4430-8596-b157390ed5c7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.838330 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.843260 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.843409 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.843568 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.845739 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.864054 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh"] Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.941806 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.942449 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:01 crc kubenswrapper[4787]: I0219 19:54:01.942492 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvtx\" (UniqueName: \"kubernetes.io/projected/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-kube-api-access-qfvtx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.045105 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.045182 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvtx\" (UniqueName: \"kubernetes.io/projected/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-kube-api-access-qfvtx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.045293 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.050245 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.051169 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.063731 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfvtx\" (UniqueName: \"kubernetes.io/projected/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-kube-api-access-qfvtx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-r2hlh\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.161752 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.695480 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh"] Feb 19 19:54:02 crc kubenswrapper[4787]: I0219 19:54:02.790506 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" event={"ID":"5afed044-bccc-4d1f-9b24-ffbc4ebecb65","Type":"ContainerStarted","Data":"e0e7a700b0d30717b0df7e2eb35fb09f1704796ab63c6644b6c90b413b360c59"} Feb 19 19:54:03 crc kubenswrapper[4787]: I0219 19:54:03.800953 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" event={"ID":"5afed044-bccc-4d1f-9b24-ffbc4ebecb65","Type":"ContainerStarted","Data":"ab955de653dbaef8bc4b78103b9a45215df792e47d44eb43387f1f89eb748ef3"} Feb 19 19:54:03 crc kubenswrapper[4787]: I0219 19:54:03.822283 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" podStartSLOduration=2.395185976 podStartE2EDuration="2.822260163s" podCreationTimestamp="2026-02-19 19:54:01 +0000 UTC" firstStartedPulling="2026-02-19 19:54:02.70632646 +0000 UTC m=+2110.496992402" lastFinishedPulling="2026-02-19 19:54:03.133400647 +0000 UTC m=+2110.924066589" observedRunningTime="2026-02-19 19:54:03.814841272 +0000 UTC m=+2111.605507224" watchObservedRunningTime="2026-02-19 19:54:03.822260163 +0000 UTC m=+2111.612926095" Feb 19 19:54:07 crc kubenswrapper[4787]: I0219 19:54:07.955626 4787 scope.go:117] "RemoveContainer" containerID="26b99dd13fb552e884facb81083ff2266f2962a36b1f79e5ae8d6ade0af05be2" Feb 19 19:54:07 crc kubenswrapper[4787]: I0219 19:54:07.985211 4787 scope.go:117] "RemoveContainer" containerID="6a7578e70d4167684c52b75db29a2c06804ea8f9400529f5d880add104298db6" Feb 19 19:54:08 crc 
kubenswrapper[4787]: I0219 19:54:08.046957 4787 scope.go:117] "RemoveContainer" containerID="63e9085b9f6f51a0a6f2bbc5ae47da7024e774b61a692b2ea62658123466dbb7" Feb 19 19:54:08 crc kubenswrapper[4787]: I0219 19:54:08.098734 4787 scope.go:117] "RemoveContainer" containerID="4ae175dcb72e5ee3baa1779c935e40a059c278217348c663f599cfb1e58eff37" Feb 19 19:54:08 crc kubenswrapper[4787]: I0219 19:54:08.154592 4787 scope.go:117] "RemoveContainer" containerID="6f15087f35d3cba1ad843fbf5e9b04ff128b0e2b507153ee1fb1b0c2ecdabc2c" Feb 19 19:54:09 crc kubenswrapper[4787]: I0219 19:54:09.263427 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:54:09 crc kubenswrapper[4787]: I0219 19:54:09.263490 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:54:26 crc kubenswrapper[4787]: I0219 19:54:26.043787 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sbjcl"] Feb 19 19:54:26 crc kubenswrapper[4787]: I0219 19:54:26.053431 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sbjcl"] Feb 19 19:54:26 crc kubenswrapper[4787]: I0219 19:54:26.906210 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab8eaee-1764-424d-bbdf-d94f96dc6aa6" path="/var/lib/kubelet/pods/7ab8eaee-1764-424d-bbdf-d94f96dc6aa6/volumes" Feb 19 19:54:38 crc kubenswrapper[4787]: I0219 19:54:38.161747 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="5afed044-bccc-4d1f-9b24-ffbc4ebecb65" containerID="ab955de653dbaef8bc4b78103b9a45215df792e47d44eb43387f1f89eb748ef3" exitCode=0 Feb 19 19:54:38 crc kubenswrapper[4787]: I0219 19:54:38.161845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" event={"ID":"5afed044-bccc-4d1f-9b24-ffbc4ebecb65","Type":"ContainerDied","Data":"ab955de653dbaef8bc4b78103b9a45215df792e47d44eb43387f1f89eb748ef3"} Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.263432 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.263856 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.728441 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.778068 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfvtx\" (UniqueName: \"kubernetes.io/projected/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-kube-api-access-qfvtx\") pod \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.778198 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-ssh-key-openstack-edpm-ipam\") pod \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.778404 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-inventory\") pod \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\" (UID: \"5afed044-bccc-4d1f-9b24-ffbc4ebecb65\") " Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.786826 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-kube-api-access-qfvtx" (OuterVolumeSpecName: "kube-api-access-qfvtx") pod "5afed044-bccc-4d1f-9b24-ffbc4ebecb65" (UID: "5afed044-bccc-4d1f-9b24-ffbc4ebecb65"). InnerVolumeSpecName "kube-api-access-qfvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.826032 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-inventory" (OuterVolumeSpecName: "inventory") pod "5afed044-bccc-4d1f-9b24-ffbc4ebecb65" (UID: "5afed044-bccc-4d1f-9b24-ffbc4ebecb65"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.831493 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5afed044-bccc-4d1f-9b24-ffbc4ebecb65" (UID: "5afed044-bccc-4d1f-9b24-ffbc4ebecb65"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.882172 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.882212 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfvtx\" (UniqueName: \"kubernetes.io/projected/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-kube-api-access-qfvtx\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:39 crc kubenswrapper[4787]: I0219 19:54:39.882224 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5afed044-bccc-4d1f-9b24-ffbc4ebecb65-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.186219 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" event={"ID":"5afed044-bccc-4d1f-9b24-ffbc4ebecb65","Type":"ContainerDied","Data":"e0e7a700b0d30717b0df7e2eb35fb09f1704796ab63c6644b6c90b413b360c59"} Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.186284 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e7a700b0d30717b0df7e2eb35fb09f1704796ab63c6644b6c90b413b360c59" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 
19:54:40.186366 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-r2hlh" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.367227 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb"] Feb 19 19:54:40 crc kubenswrapper[4787]: E0219 19:54:40.368099 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afed044-bccc-4d1f-9b24-ffbc4ebecb65" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.368115 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afed044-bccc-4d1f-9b24-ffbc4ebecb65" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.368369 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afed044-bccc-4d1f-9b24-ffbc4ebecb65" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.369401 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.373314 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.373542 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.373638 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.374900 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.377799 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb"] Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.406747 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.406870 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.406966 
4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvv7\" (UniqueName: \"kubernetes.io/projected/c51c1e25-0f98-4bc4-ad23-c5123e535d97-kube-api-access-qmvv7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.509667 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.509803 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.509946 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvv7\" (UniqueName: \"kubernetes.io/projected/c51c1e25-0f98-4bc4-ad23-c5123e535d97-kube-api-access-qmvv7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.513758 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.523938 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.543368 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvv7\" (UniqueName: \"kubernetes.io/projected/c51c1e25-0f98-4bc4-ad23-c5123e535d97-kube-api-access-qmvv7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-74klb\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:40 crc kubenswrapper[4787]: I0219 19:54:40.696817 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:54:41 crc kubenswrapper[4787]: I0219 19:54:41.272571 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb"] Feb 19 19:54:42 crc kubenswrapper[4787]: I0219 19:54:42.206065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" event={"ID":"c51c1e25-0f98-4bc4-ad23-c5123e535d97","Type":"ContainerStarted","Data":"8896289be1c0f6a8631204b9ce037f4fd9c6ead7841f3b9d65fc751f54944621"} Feb 19 19:54:42 crc kubenswrapper[4787]: I0219 19:54:42.206648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" event={"ID":"c51c1e25-0f98-4bc4-ad23-c5123e535d97","Type":"ContainerStarted","Data":"14fa825d275d3c13adac1413443f020f8f9effef9447dc87101a94ce7a61eb12"} Feb 19 19:54:42 crc kubenswrapper[4787]: I0219 19:54:42.229320 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" podStartSLOduration=1.801859391 podStartE2EDuration="2.229297849s" podCreationTimestamp="2026-02-19 19:54:40 +0000 UTC" firstStartedPulling="2026-02-19 19:54:41.283863283 +0000 UTC m=+2149.074529235" lastFinishedPulling="2026-02-19 19:54:41.711301741 +0000 UTC m=+2149.501967693" observedRunningTime="2026-02-19 19:54:42.223044251 +0000 UTC m=+2150.013710193" watchObservedRunningTime="2026-02-19 19:54:42.229297849 +0000 UTC m=+2150.019963791" Feb 19 19:55:08 crc kubenswrapper[4787]: I0219 19:55:08.343658 4787 scope.go:117] "RemoveContainer" containerID="3cb2b2027e7bb4e899aeedcb7aaa2661e6979fdcfe37d211def7337208fe1320" Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.263858 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.264216 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.264264 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.265241 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.265303 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" gracePeriod=600 Feb 19 19:55:09 crc kubenswrapper[4787]: E0219 19:55:09.394114 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.509079 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" exitCode=0 Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.509141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a"} Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.509461 4787 scope.go:117] "RemoveContainer" containerID="b28d37fcd770cc8ddbfa968a92e9da36d6c302db5f51061d56875fb7273ab058" Feb 19 19:55:09 crc kubenswrapper[4787]: I0219 19:55:09.510182 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:55:09 crc kubenswrapper[4787]: E0219 19:55:09.510488 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:55:23 crc kubenswrapper[4787]: I0219 19:55:23.893074 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:55:23 crc kubenswrapper[4787]: E0219 19:55:23.893887 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:55:24 crc kubenswrapper[4787]: I0219 19:55:24.677882 4787 generic.go:334] "Generic (PLEG): container finished" podID="c51c1e25-0f98-4bc4-ad23-c5123e535d97" containerID="8896289be1c0f6a8631204b9ce037f4fd9c6ead7841f3b9d65fc751f54944621" exitCode=0 Feb 19 19:55:24 crc kubenswrapper[4787]: I0219 19:55:24.677983 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" event={"ID":"c51c1e25-0f98-4bc4-ad23-c5123e535d97","Type":"ContainerDied","Data":"8896289be1c0f6a8631204b9ce037f4fd9c6ead7841f3b9d65fc751f54944621"} Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.153849 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.251132 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmvv7\" (UniqueName: \"kubernetes.io/projected/c51c1e25-0f98-4bc4-ad23-c5123e535d97-kube-api-access-qmvv7\") pod \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.251193 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-ssh-key-openstack-edpm-ipam\") pod \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.251472 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-inventory\") pod \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\" (UID: \"c51c1e25-0f98-4bc4-ad23-c5123e535d97\") " Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.256589 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51c1e25-0f98-4bc4-ad23-c5123e535d97-kube-api-access-qmvv7" (OuterVolumeSpecName: "kube-api-access-qmvv7") pod "c51c1e25-0f98-4bc4-ad23-c5123e535d97" (UID: "c51c1e25-0f98-4bc4-ad23-c5123e535d97"). InnerVolumeSpecName "kube-api-access-qmvv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.281123 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c51c1e25-0f98-4bc4-ad23-c5123e535d97" (UID: "c51c1e25-0f98-4bc4-ad23-c5123e535d97"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.284592 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-inventory" (OuterVolumeSpecName: "inventory") pod "c51c1e25-0f98-4bc4-ad23-c5123e535d97" (UID: "c51c1e25-0f98-4bc4-ad23-c5123e535d97"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.354766 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.354803 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmvv7\" (UniqueName: \"kubernetes.io/projected/c51c1e25-0f98-4bc4-ad23-c5123e535d97-kube-api-access-qmvv7\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.354814 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51c1e25-0f98-4bc4-ad23-c5123e535d97-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.700451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" event={"ID":"c51c1e25-0f98-4bc4-ad23-c5123e535d97","Type":"ContainerDied","Data":"14fa825d275d3c13adac1413443f020f8f9effef9447dc87101a94ce7a61eb12"} Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.700857 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14fa825d275d3c13adac1413443f020f8f9effef9447dc87101a94ce7a61eb12" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.700543 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-74klb" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.798089 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhp6n"] Feb 19 19:55:26 crc kubenswrapper[4787]: E0219 19:55:26.798727 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51c1e25-0f98-4bc4-ad23-c5123e535d97" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.798750 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51c1e25-0f98-4bc4-ad23-c5123e535d97" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.799028 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51c1e25-0f98-4bc4-ad23-c5123e535d97" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.800205 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.802745 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.803098 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.803161 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.803360 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.812385 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhp6n"] Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.867517 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cl6p\" (UniqueName: \"kubernetes.io/projected/9b584219-d3d0-490f-bba5-dff958e63e6a-kube-api-access-6cl6p\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.867705 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.867852 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.970194 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.971710 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cl6p\" (UniqueName: \"kubernetes.io/projected/9b584219-d3d0-490f-bba5-dff958e63e6a-kube-api-access-6cl6p\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.972054 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.977132 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: 
I0219 19:55:26.984567 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:26 crc kubenswrapper[4787]: I0219 19:55:26.991417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cl6p\" (UniqueName: \"kubernetes.io/projected/9b584219-d3d0-490f-bba5-dff958e63e6a-kube-api-access-6cl6p\") pod \"ssh-known-hosts-edpm-deployment-vhp6n\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:27 crc kubenswrapper[4787]: I0219 19:55:27.134139 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:27 crc kubenswrapper[4787]: I0219 19:55:27.649755 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-vhp6n"] Feb 19 19:55:27 crc kubenswrapper[4787]: I0219 19:55:27.710402 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" event={"ID":"9b584219-d3d0-490f-bba5-dff958e63e6a","Type":"ContainerStarted","Data":"462d27b05835ad8d15a6d2ea8e8c4466409106251e77eda820bbde5f4028be10"} Feb 19 19:55:28 crc kubenswrapper[4787]: I0219 19:55:28.721506 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" event={"ID":"9b584219-d3d0-490f-bba5-dff958e63e6a","Type":"ContainerStarted","Data":"d31a797ad9b712b400545656ae37540b2cf11ab5b1ff9cf6e5ee7a53d7688d17"} Feb 19 19:55:34 crc kubenswrapper[4787]: I0219 19:55:34.803070 4787 generic.go:334] "Generic (PLEG): container finished" podID="9b584219-d3d0-490f-bba5-dff958e63e6a" 
containerID="d31a797ad9b712b400545656ae37540b2cf11ab5b1ff9cf6e5ee7a53d7688d17" exitCode=0 Feb 19 19:55:34 crc kubenswrapper[4787]: I0219 19:55:34.803132 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" event={"ID":"9b584219-d3d0-490f-bba5-dff958e63e6a","Type":"ContainerDied","Data":"d31a797ad9b712b400545656ae37540b2cf11ab5b1ff9cf6e5ee7a53d7688d17"} Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.283466 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.423279 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-ssh-key-openstack-edpm-ipam\") pod \"9b584219-d3d0-490f-bba5-dff958e63e6a\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.423866 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-inventory-0\") pod \"9b584219-d3d0-490f-bba5-dff958e63e6a\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.424818 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cl6p\" (UniqueName: \"kubernetes.io/projected/9b584219-d3d0-490f-bba5-dff958e63e6a-kube-api-access-6cl6p\") pod \"9b584219-d3d0-490f-bba5-dff958e63e6a\" (UID: \"9b584219-d3d0-490f-bba5-dff958e63e6a\") " Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.431099 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b584219-d3d0-490f-bba5-dff958e63e6a-kube-api-access-6cl6p" (OuterVolumeSpecName: "kube-api-access-6cl6p") pod 
"9b584219-d3d0-490f-bba5-dff958e63e6a" (UID: "9b584219-d3d0-490f-bba5-dff958e63e6a"). InnerVolumeSpecName "kube-api-access-6cl6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.458385 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9b584219-d3d0-490f-bba5-dff958e63e6a" (UID: "9b584219-d3d0-490f-bba5-dff958e63e6a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.458403 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b584219-d3d0-490f-bba5-dff958e63e6a" (UID: "9b584219-d3d0-490f-bba5-dff958e63e6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.531907 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.531964 4787 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b584219-d3d0-490f-bba5-dff958e63e6a-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.531978 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cl6p\" (UniqueName: \"kubernetes.io/projected/9b584219-d3d0-490f-bba5-dff958e63e6a-kube-api-access-6cl6p\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.824324 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" event={"ID":"9b584219-d3d0-490f-bba5-dff958e63e6a","Type":"ContainerDied","Data":"462d27b05835ad8d15a6d2ea8e8c4466409106251e77eda820bbde5f4028be10"} Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.824373 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="462d27b05835ad8d15a6d2ea8e8c4466409106251e77eda820bbde5f4028be10" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.824382 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-vhp6n" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.910743 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx"] Feb 19 19:55:36 crc kubenswrapper[4787]: E0219 19:55:36.911280 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b584219-d3d0-490f-bba5-dff958e63e6a" containerName="ssh-known-hosts-edpm-deployment" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.911306 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b584219-d3d0-490f-bba5-dff958e63e6a" containerName="ssh-known-hosts-edpm-deployment" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.911540 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b584219-d3d0-490f-bba5-dff958e63e6a" containerName="ssh-known-hosts-edpm-deployment" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.912571 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.916465 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.917236 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.917320 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.917362 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:55:36 crc kubenswrapper[4787]: I0219 19:55:36.944526 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx"] Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.045928 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.047641 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.047812 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4g5\" (UniqueName: \"kubernetes.io/projected/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-kube-api-access-7r4g5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.152086 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.152213 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4g5\" (UniqueName: \"kubernetes.io/projected/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-kube-api-access-7r4g5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.152289 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.160808 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: 
\"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.161232 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.171842 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r4g5\" (UniqueName: \"kubernetes.io/projected/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-kube-api-access-7r4g5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-srwrx\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.244751 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.763042 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx"] Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.763304 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.834739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" event={"ID":"688fc5c8-6b45-40de-9e80-8b90ccc5ca16","Type":"ContainerStarted","Data":"1d5b8be33600a1dbb4e3087818d28d1998e9cc72f744ffcd502966284ebdddf9"} Feb 19 19:55:37 crc kubenswrapper[4787]: I0219 19:55:37.892581 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:55:37 crc kubenswrapper[4787]: E0219 19:55:37.893121 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:55:38 crc kubenswrapper[4787]: I0219 19:55:38.844875 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" event={"ID":"688fc5c8-6b45-40de-9e80-8b90ccc5ca16","Type":"ContainerStarted","Data":"b80cb95f8502429b602e12dd35379f5acd132fb6ef598cf5d291a199911f7139"} Feb 19 19:55:38 crc kubenswrapper[4787]: I0219 19:55:38.862001 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" 
podStartSLOduration=2.441473323 podStartE2EDuration="2.861986755s" podCreationTimestamp="2026-02-19 19:55:36 +0000 UTC" firstStartedPulling="2026-02-19 19:55:37.763027133 +0000 UTC m=+2205.553693075" lastFinishedPulling="2026-02-19 19:55:38.183540565 +0000 UTC m=+2205.974206507" observedRunningTime="2026-02-19 19:55:38.861338227 +0000 UTC m=+2206.652004159" watchObservedRunningTime="2026-02-19 19:55:38.861986755 +0000 UTC m=+2206.652652697" Feb 19 19:55:45 crc kubenswrapper[4787]: I0219 19:55:45.915700 4787 generic.go:334] "Generic (PLEG): container finished" podID="688fc5c8-6b45-40de-9e80-8b90ccc5ca16" containerID="b80cb95f8502429b602e12dd35379f5acd132fb6ef598cf5d291a199911f7139" exitCode=0 Feb 19 19:55:45 crc kubenswrapper[4787]: I0219 19:55:45.915720 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" event={"ID":"688fc5c8-6b45-40de-9e80-8b90ccc5ca16","Type":"ContainerDied","Data":"b80cb95f8502429b602e12dd35379f5acd132fb6ef598cf5d291a199911f7139"} Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.456314 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.522485 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-inventory\") pod \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.523005 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-ssh-key-openstack-edpm-ipam\") pod \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.523124 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r4g5\" (UniqueName: \"kubernetes.io/projected/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-kube-api-access-7r4g5\") pod \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\" (UID: \"688fc5c8-6b45-40de-9e80-8b90ccc5ca16\") " Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.529569 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-kube-api-access-7r4g5" (OuterVolumeSpecName: "kube-api-access-7r4g5") pod "688fc5c8-6b45-40de-9e80-8b90ccc5ca16" (UID: "688fc5c8-6b45-40de-9e80-8b90ccc5ca16"). InnerVolumeSpecName "kube-api-access-7r4g5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.560484 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "688fc5c8-6b45-40de-9e80-8b90ccc5ca16" (UID: "688fc5c8-6b45-40de-9e80-8b90ccc5ca16"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.565386 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-inventory" (OuterVolumeSpecName: "inventory") pod "688fc5c8-6b45-40de-9e80-8b90ccc5ca16" (UID: "688fc5c8-6b45-40de-9e80-8b90ccc5ca16"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.627272 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.627539 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r4g5\" (UniqueName: \"kubernetes.io/projected/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-kube-api-access-7r4g5\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.627655 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688fc5c8-6b45-40de-9e80-8b90ccc5ca16-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.979664 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" 
event={"ID":"688fc5c8-6b45-40de-9e80-8b90ccc5ca16","Type":"ContainerDied","Data":"1d5b8be33600a1dbb4e3087818d28d1998e9cc72f744ffcd502966284ebdddf9"} Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.979728 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d5b8be33600a1dbb4e3087818d28d1998e9cc72f744ffcd502966284ebdddf9" Feb 19 19:55:47 crc kubenswrapper[4787]: I0219 19:55:47.979796 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-srwrx" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.029641 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6"] Feb 19 19:55:48 crc kubenswrapper[4787]: E0219 19:55:48.030538 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688fc5c8-6b45-40de-9e80-8b90ccc5ca16" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.030562 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="688fc5c8-6b45-40de-9e80-8b90ccc5ca16" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.030882 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="688fc5c8-6b45-40de-9e80-8b90ccc5ca16" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.031863 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.036110 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.039137 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.039942 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.040083 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.043025 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6"] Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.141332 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.141387 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892lc\" (UniqueName: \"kubernetes.io/projected/0ba5d253-5278-4b6c-a071-cec1f3824dd4-kube-api-access-892lc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.141599 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.245738 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-892lc\" (UniqueName: \"kubernetes.io/projected/0ba5d253-5278-4b6c-a071-cec1f3824dd4-kube-api-access-892lc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.246800 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.247665 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.251354 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.269240 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.270655 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-892lc\" (UniqueName: \"kubernetes.io/projected/0ba5d253-5278-4b6c-a071-cec1f3824dd4-kube-api-access-892lc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.369543 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.920946 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6"] Feb 19 19:55:48 crc kubenswrapper[4787]: I0219 19:55:48.991207 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" event={"ID":"0ba5d253-5278-4b6c-a071-cec1f3824dd4","Type":"ContainerStarted","Data":"facdb54c659a5907ec33d3137bab4b6b212958e9c63b3748bb7d17f80303568c"} Feb 19 19:55:50 crc kubenswrapper[4787]: I0219 19:55:50.006146 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" event={"ID":"0ba5d253-5278-4b6c-a071-cec1f3824dd4","Type":"ContainerStarted","Data":"5fa2352548e1afff967c5a3759df83cd1cbb548b6d9d46d5f898dc3c65b1515e"} Feb 19 19:55:50 crc kubenswrapper[4787]: I0219 19:55:50.034453 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" podStartSLOduration=2.615240257 podStartE2EDuration="3.034432141s" podCreationTimestamp="2026-02-19 19:55:47 +0000 UTC" firstStartedPulling="2026-02-19 19:55:48.932234317 +0000 UTC m=+2216.722900259" lastFinishedPulling="2026-02-19 19:55:49.351426191 +0000 UTC m=+2217.142092143" observedRunningTime="2026-02-19 19:55:50.026868366 +0000 UTC m=+2217.817534298" watchObservedRunningTime="2026-02-19 19:55:50.034432141 +0000 UTC m=+2217.825098083" Feb 19 19:55:52 crc kubenswrapper[4787]: I0219 19:55:52.902346 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:55:52 crc kubenswrapper[4787]: E0219 19:55:52.903418 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.055890 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cjn"] Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.059210 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.072197 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cjn"] Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.204334 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-utilities\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.204409 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmqw\" (UniqueName: \"kubernetes.io/projected/a8097062-04c9-4232-8096-4310d846f199-kube-api-access-plmqw\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.204505 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-catalog-content\") pod \"redhat-marketplace-m9cjn\" (UID: 
\"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.306367 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-utilities\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.306439 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plmqw\" (UniqueName: \"kubernetes.io/projected/a8097062-04c9-4232-8096-4310d846f199-kube-api-access-plmqw\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.306523 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-catalog-content\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.306942 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-utilities\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.306998 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-catalog-content\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " 
pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.325974 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plmqw\" (UniqueName: \"kubernetes.io/projected/a8097062-04c9-4232-8096-4310d846f199-kube-api-access-plmqw\") pod \"redhat-marketplace-m9cjn\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.384950 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:55:53 crc kubenswrapper[4787]: I0219 19:55:53.861037 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cjn"] Feb 19 19:55:54 crc kubenswrapper[4787]: I0219 19:55:54.045394 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerStarted","Data":"5dbb3aa1bff06996f69831c6e24fc76c5c2e0e76d18d419a6bd0c6dbcef6067a"} Feb 19 19:55:55 crc kubenswrapper[4787]: I0219 19:55:55.057042 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8097062-04c9-4232-8096-4310d846f199" containerID="74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50" exitCode=0 Feb 19 19:55:55 crc kubenswrapper[4787]: I0219 19:55:55.057174 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerDied","Data":"74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50"} Feb 19 19:55:56 crc kubenswrapper[4787]: I0219 19:55:56.068322 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" 
event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerStarted","Data":"a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d"} Feb 19 19:55:57 crc kubenswrapper[4787]: I0219 19:55:57.082409 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8097062-04c9-4232-8096-4310d846f199" containerID="a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d" exitCode=0 Feb 19 19:55:57 crc kubenswrapper[4787]: I0219 19:55:57.082463 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerDied","Data":"a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d"} Feb 19 19:55:58 crc kubenswrapper[4787]: I0219 19:55:58.092586 4787 generic.go:334] "Generic (PLEG): container finished" podID="0ba5d253-5278-4b6c-a071-cec1f3824dd4" containerID="5fa2352548e1afff967c5a3759df83cd1cbb548b6d9d46d5f898dc3c65b1515e" exitCode=0 Feb 19 19:55:58 crc kubenswrapper[4787]: I0219 19:55:58.092963 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" event={"ID":"0ba5d253-5278-4b6c-a071-cec1f3824dd4","Type":"ContainerDied","Data":"5fa2352548e1afff967c5a3759df83cd1cbb548b6d9d46d5f898dc3c65b1515e"} Feb 19 19:55:58 crc kubenswrapper[4787]: I0219 19:55:58.096939 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerStarted","Data":"b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae"} Feb 19 19:55:58 crc kubenswrapper[4787]: I0219 19:55:58.132740 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m9cjn" podStartSLOduration=2.701549896 podStartE2EDuration="5.132721699s" podCreationTimestamp="2026-02-19 19:55:53 +0000 UTC" firstStartedPulling="2026-02-19 
19:55:55.060800077 +0000 UTC m=+2222.851466019" lastFinishedPulling="2026-02-19 19:55:57.49197187 +0000 UTC m=+2225.282637822" observedRunningTime="2026-02-19 19:55:58.130036793 +0000 UTC m=+2225.920702745" watchObservedRunningTime="2026-02-19 19:55:58.132721699 +0000 UTC m=+2225.923387641" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.595597 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.663034 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-ssh-key-openstack-edpm-ipam\") pod \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.663178 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-inventory\") pod \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.663298 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-892lc\" (UniqueName: \"kubernetes.io/projected/0ba5d253-5278-4b6c-a071-cec1f3824dd4-kube-api-access-892lc\") pod \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\" (UID: \"0ba5d253-5278-4b6c-a071-cec1f3824dd4\") " Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.668735 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba5d253-5278-4b6c-a071-cec1f3824dd4-kube-api-access-892lc" (OuterVolumeSpecName: "kube-api-access-892lc") pod "0ba5d253-5278-4b6c-a071-cec1f3824dd4" (UID: "0ba5d253-5278-4b6c-a071-cec1f3824dd4"). InnerVolumeSpecName "kube-api-access-892lc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.702508 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0ba5d253-5278-4b6c-a071-cec1f3824dd4" (UID: "0ba5d253-5278-4b6c-a071-cec1f3824dd4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.705517 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-inventory" (OuterVolumeSpecName: "inventory") pod "0ba5d253-5278-4b6c-a071-cec1f3824dd4" (UID: "0ba5d253-5278-4b6c-a071-cec1f3824dd4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.766007 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.766043 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ba5d253-5278-4b6c-a071-cec1f3824dd4-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:55:59 crc kubenswrapper[4787]: I0219 19:55:59.766053 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-892lc\" (UniqueName: \"kubernetes.io/projected/0ba5d253-5278-4b6c-a071-cec1f3824dd4-kube-api-access-892lc\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.124798 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" 
event={"ID":"0ba5d253-5278-4b6c-a071-cec1f3824dd4","Type":"ContainerDied","Data":"facdb54c659a5907ec33d3137bab4b6b212958e9c63b3748bb7d17f80303568c"} Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.124846 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="facdb54c659a5907ec33d3137bab4b6b212958e9c63b3748bb7d17f80303568c" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.124950 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.302647 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2"] Feb 19 19:56:00 crc kubenswrapper[4787]: E0219 19:56:00.303244 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba5d253-5278-4b6c-a071-cec1f3824dd4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.303262 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba5d253-5278-4b6c-a071-cec1f3824dd4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.303570 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba5d253-5278-4b6c-a071-cec1f3824dd4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.304727 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.307221 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.310322 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.310588 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.310666 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.311089 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.311388 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.311438 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.311463 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.311499 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.320097 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2"] Feb 19 19:56:00 crc 
kubenswrapper[4787]: I0219 19:56:00.408140 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408429 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408455 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408627 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g6b7\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-kube-api-access-2g6b7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408681 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408712 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408807 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408863 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: 
\"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.408933 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409217 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409311 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409351 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409390 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409660 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.409718 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: 
\"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511476 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511545 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511576 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511669 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511741 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511806 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g6b7\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-kube-api-access-2g6b7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511855 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc 
kubenswrapper[4787]: I0219 19:56:00.511879 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511925 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.511963 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.512018 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc 
kubenswrapper[4787]: I0219 19:56:00.512058 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.512096 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.512119 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.512141 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.518152 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.518400 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.518409 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.518591 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.518687 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.519010 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.519094 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.519219 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.519503 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.520403 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.520999 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.521342 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.522126 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.522710 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.523228 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.531009 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g6b7\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-kube-api-access-2g6b7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:00 crc kubenswrapper[4787]: I0219 19:56:00.650863 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:01 crc kubenswrapper[4787]: W0219 19:56:01.174200 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dbede29_862e_457a_a641_689836eab084.slice/crio-4a09f39a20cd5108837900048b184830466be80f71b1b4681e400ded7b6c5582 WatchSource:0}: Error finding container 4a09f39a20cd5108837900048b184830466be80f71b1b4681e400ded7b6c5582: Status 404 returned error can't find the container with id 4a09f39a20cd5108837900048b184830466be80f71b1b4681e400ded7b6c5582 Feb 19 19:56:01 crc kubenswrapper[4787]: I0219 19:56:01.180325 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2"] Feb 19 19:56:02 crc kubenswrapper[4787]: I0219 19:56:02.144178 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" event={"ID":"7dbede29-862e-457a-a641-689836eab084","Type":"ContainerStarted","Data":"ee56359ce706b97cfbd77c24971eda27ce454ae2068eb870fc8e872645d1746e"} Feb 19 19:56:02 crc kubenswrapper[4787]: I0219 19:56:02.145810 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" event={"ID":"7dbede29-862e-457a-a641-689836eab084","Type":"ContainerStarted","Data":"4a09f39a20cd5108837900048b184830466be80f71b1b4681e400ded7b6c5582"} Feb 19 19:56:02 crc kubenswrapper[4787]: I0219 19:56:02.176622 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" podStartSLOduration=1.644484872 podStartE2EDuration="2.17659033s" podCreationTimestamp="2026-02-19 19:56:00 +0000 UTC" firstStartedPulling="2026-02-19 19:56:01.177892022 +0000 UTC m=+2228.968557984" lastFinishedPulling="2026-02-19 19:56:01.7099975 +0000 UTC m=+2229.500663442" 
observedRunningTime="2026-02-19 19:56:02.166057631 +0000 UTC m=+2229.956723593" watchObservedRunningTime="2026-02-19 19:56:02.17659033 +0000 UTC m=+2229.967256272" Feb 19 19:56:03 crc kubenswrapper[4787]: I0219 19:56:03.385649 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:56:03 crc kubenswrapper[4787]: I0219 19:56:03.386086 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:56:03 crc kubenswrapper[4787]: I0219 19:56:03.457002 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:56:04 crc kubenswrapper[4787]: I0219 19:56:04.244756 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:56:04 crc kubenswrapper[4787]: I0219 19:56:04.305542 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cjn"] Feb 19 19:56:06 crc kubenswrapper[4787]: I0219 19:56:06.191376 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m9cjn" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="registry-server" containerID="cri-o://b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae" gracePeriod=2 Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.722528 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.883493 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plmqw\" (UniqueName: \"kubernetes.io/projected/a8097062-04c9-4232-8096-4310d846f199-kube-api-access-plmqw\") pod \"a8097062-04c9-4232-8096-4310d846f199\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.883963 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-utilities\") pod \"a8097062-04c9-4232-8096-4310d846f199\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.884236 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-catalog-content\") pod \"a8097062-04c9-4232-8096-4310d846f199\" (UID: \"a8097062-04c9-4232-8096-4310d846f199\") " Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.884987 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-utilities" (OuterVolumeSpecName: "utilities") pod "a8097062-04c9-4232-8096-4310d846f199" (UID: "a8097062-04c9-4232-8096-4310d846f199"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.885522 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.892666 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:56:07 crc kubenswrapper[4787]: E0219 19:56:06.893026 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.900898 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8097062-04c9-4232-8096-4310d846f199-kube-api-access-plmqw" (OuterVolumeSpecName: "kube-api-access-plmqw") pod "a8097062-04c9-4232-8096-4310d846f199" (UID: "a8097062-04c9-4232-8096-4310d846f199"). InnerVolumeSpecName "kube-api-access-plmqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.914151 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8097062-04c9-4232-8096-4310d846f199" (UID: "a8097062-04c9-4232-8096-4310d846f199"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.989658 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8097062-04c9-4232-8096-4310d846f199-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:06.989709 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plmqw\" (UniqueName: \"kubernetes.io/projected/a8097062-04c9-4232-8096-4310d846f199-kube-api-access-plmqw\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.204918 4787 generic.go:334] "Generic (PLEG): container finished" podID="a8097062-04c9-4232-8096-4310d846f199" containerID="b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae" exitCode=0 Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.204978 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerDied","Data":"b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae"} Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.205004 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m9cjn" event={"ID":"a8097062-04c9-4232-8096-4310d846f199","Type":"ContainerDied","Data":"5dbb3aa1bff06996f69831c6e24fc76c5c2e0e76d18d419a6bd0c6dbcef6067a"} Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.205014 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m9cjn" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.205024 4787 scope.go:117] "RemoveContainer" containerID="b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.225511 4787 scope.go:117] "RemoveContainer" containerID="a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.255464 4787 scope.go:117] "RemoveContainer" containerID="74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.262122 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cjn"] Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.273104 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m9cjn"] Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.316849 4787 scope.go:117] "RemoveContainer" containerID="b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae" Feb 19 19:56:07 crc kubenswrapper[4787]: E0219 19:56:07.317471 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae\": container with ID starting with b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae not found: ID does not exist" containerID="b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.317600 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae"} err="failed to get container status \"b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae\": rpc error: code = NotFound desc = could not find container 
\"b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae\": container with ID starting with b398ef6424b45606fda03392ae1501097bc0f06e243e3de7cd02b596e0f457ae not found: ID does not exist" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.317739 4787 scope.go:117] "RemoveContainer" containerID="a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d" Feb 19 19:56:07 crc kubenswrapper[4787]: E0219 19:56:07.318222 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d\": container with ID starting with a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d not found: ID does not exist" containerID="a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.318251 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d"} err="failed to get container status \"a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d\": rpc error: code = NotFound desc = could not find container \"a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d\": container with ID starting with a21fcc41d9860d199f58b5be7e0f4838b8b5724950ab175566d833b3c6e2d59d not found: ID does not exist" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.318278 4787 scope.go:117] "RemoveContainer" containerID="74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50" Feb 19 19:56:07 crc kubenswrapper[4787]: E0219 19:56:07.318521 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50\": container with ID starting with 74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50 not found: ID does not exist" 
containerID="74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50" Feb 19 19:56:07 crc kubenswrapper[4787]: I0219 19:56:07.318544 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50"} err="failed to get container status \"74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50\": rpc error: code = NotFound desc = could not find container \"74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50\": container with ID starting with 74027053a01ee9fdc62d7198ad0fb8081ee2286e7cd8440cdac35dc71e1b7c50 not found: ID does not exist" Feb 19 19:56:08 crc kubenswrapper[4787]: I0219 19:56:08.904555 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8097062-04c9-4232-8096-4310d846f199" path="/var/lib/kubelet/pods/a8097062-04c9-4232-8096-4310d846f199/volumes" Feb 19 19:56:20 crc kubenswrapper[4787]: I0219 19:56:20.891793 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:56:20 crc kubenswrapper[4787]: E0219 19:56:20.892514 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:56:21 crc kubenswrapper[4787]: I0219 19:56:21.050978 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-xq7tp"] Feb 19 19:56:21 crc kubenswrapper[4787]: I0219 19:56:21.062752 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-xq7tp"] Feb 19 19:56:22 crc kubenswrapper[4787]: I0219 19:56:22.908018 4787 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fb810906-81bd-42b7-9a2b-0900059baba9" path="/var/lib/kubelet/pods/fb810906-81bd-42b7-9a2b-0900059baba9/volumes" Feb 19 19:56:32 crc kubenswrapper[4787]: I0219 19:56:32.904386 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:56:32 crc kubenswrapper[4787]: E0219 19:56:32.906434 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:56:43 crc kubenswrapper[4787]: I0219 19:56:43.636777 4787 generic.go:334] "Generic (PLEG): container finished" podID="7dbede29-862e-457a-a641-689836eab084" containerID="ee56359ce706b97cfbd77c24971eda27ce454ae2068eb870fc8e872645d1746e" exitCode=0 Feb 19 19:56:43 crc kubenswrapper[4787]: I0219 19:56:43.637192 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" event={"ID":"7dbede29-862e-457a-a641-689836eab084","Type":"ContainerDied","Data":"ee56359ce706b97cfbd77c24971eda27ce454ae2068eb870fc8e872645d1746e"} Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.094234 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210452 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ovn-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210513 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210573 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210632 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210675 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-neutron-metadata-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210699 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-libvirt-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210717 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-power-monitoring-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210763 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-nova-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210788 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210852 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g6b7\" (UniqueName: 
\"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-kube-api-access-2g6b7\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210880 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.210915 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-inventory\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.211045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.211095 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-repo-setup-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.211120 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-bootstrap-combined-ca-bundle\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.211137 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ssh-key-openstack-edpm-ipam\") pod \"7dbede29-862e-457a-a641-689836eab084\" (UID: \"7dbede29-862e-457a-a641-689836eab084\") " Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219233 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219455 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219713 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219777 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219898 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219920 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.219940 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.220709 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.220767 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.220770 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-kube-api-access-2g6b7" (OuterVolumeSpecName: "kube-api-access-2g6b7") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "kube-api-access-2g6b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.220936 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.221498 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.231965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.232025 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.253565 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.255562 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-inventory" (OuterVolumeSpecName: "inventory") pod "7dbede29-862e-457a-a641-689836eab084" (UID: "7dbede29-862e-457a-a641-689836eab084"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317014 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g6b7\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-kube-api-access-2g6b7\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317072 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317096 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317155 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317180 4787 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317201 4787 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317219 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317238 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317255 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317275 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317294 4787 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317313 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317331 4787 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317349 4787 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317368 4787 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbede29-862e-457a-a641-689836eab084-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.317386 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7dbede29-862e-457a-a641-689836eab084-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.657592 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" event={"ID":"7dbede29-862e-457a-a641-689836eab084","Type":"ContainerDied","Data":"4a09f39a20cd5108837900048b184830466be80f71b1b4681e400ded7b6c5582"} Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.657965 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a09f39a20cd5108837900048b184830466be80f71b1b4681e400ded7b6c5582" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.657875 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756040 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx"] Feb 19 19:56:45 crc kubenswrapper[4787]: E0219 19:56:45.756557 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="registry-server" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756573 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="registry-server" Feb 19 19:56:45 crc kubenswrapper[4787]: E0219 19:56:45.756591 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbede29-862e-457a-a641-689836eab084" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756599 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbede29-862e-457a-a641-689836eab084" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 19:56:45 crc kubenswrapper[4787]: E0219 19:56:45.756632 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="extract-content" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756638 4787 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="extract-content" Feb 19 19:56:45 crc kubenswrapper[4787]: E0219 19:56:45.756681 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="extract-utilities" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756687 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="extract-utilities" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756898 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbede29-862e-457a-a641-689836eab084" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.756917 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8097062-04c9-4232-8096-4310d846f199" containerName="registry-server" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.757706 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.759778 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.760043 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.760318 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.760508 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.762021 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.781446 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx"] Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.930073 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.930159 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dbtw\" (UniqueName: \"kubernetes.io/projected/0a8fa017-7d8f-49d2-bd24-2a65b206e279-kube-api-access-9dbtw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: 
\"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.930452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.930949 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:45 crc kubenswrapper[4787]: I0219 19:56:45.930992 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.033546 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.033641 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.033695 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.033820 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dbtw\" (UniqueName: \"kubernetes.io/projected/0a8fa017-7d8f-49d2-bd24-2a65b206e279-kube-api-access-9dbtw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.033910 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.034641 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.039826 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.039838 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.045367 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.058047 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dbtw\" (UniqueName: \"kubernetes.io/projected/0a8fa017-7d8f-49d2-bd24-2a65b206e279-kube-api-access-9dbtw\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fz5rx\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.077758 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.616005 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx"] Feb 19 19:56:46 crc kubenswrapper[4787]: I0219 19:56:46.667592 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" event={"ID":"0a8fa017-7d8f-49d2-bd24-2a65b206e279","Type":"ContainerStarted","Data":"1919a1eee5faa4da2b8b9929a3ba0d2f8b021da933eb73971989597938646d89"} Feb 19 19:56:47 crc kubenswrapper[4787]: I0219 19:56:47.682293 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" event={"ID":"0a8fa017-7d8f-49d2-bd24-2a65b206e279","Type":"ContainerStarted","Data":"9157d921113ee71fb1a5d54841eeb02360b01bf8c09b69c3398a2908531164cd"} Feb 19 19:56:47 crc kubenswrapper[4787]: I0219 19:56:47.722429 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" podStartSLOduration=2.3393710309999998 podStartE2EDuration="2.722408229s" podCreationTimestamp="2026-02-19 19:56:45 +0000 UTC" firstStartedPulling="2026-02-19 19:56:46.627223885 +0000 UTC m=+2274.417889827" lastFinishedPulling="2026-02-19 19:56:47.010261083 +0000 UTC m=+2274.800927025" observedRunningTime="2026-02-19 19:56:47.702802062 +0000 UTC m=+2275.493468084" watchObservedRunningTime="2026-02-19 19:56:47.722408229 +0000 UTC m=+2275.513074181" Feb 19 19:56:47 crc kubenswrapper[4787]: I0219 19:56:47.892062 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:56:47 crc kubenswrapper[4787]: E0219 19:56:47.893489 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:56:59 crc kubenswrapper[4787]: I0219 19:56:59.049333 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-bpfk6"] Feb 19 19:56:59 crc kubenswrapper[4787]: I0219 19:56:59.063292 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-bpfk6"] Feb 19 19:57:00 crc kubenswrapper[4787]: I0219 19:57:00.893343 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:57:00 crc kubenswrapper[4787]: E0219 19:57:00.894143 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:57:00 crc kubenswrapper[4787]: I0219 19:57:00.906850 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28778f5-042a-4379-b738-86ef9f31a6df" path="/var/lib/kubelet/pods/b28778f5-042a-4379-b738-86ef9f31a6df/volumes" Feb 19 19:57:08 crc kubenswrapper[4787]: I0219 19:57:08.467037 4787 scope.go:117] "RemoveContainer" containerID="f6c62747bcaf8d76fa0433c3424c857b1c71872248c5f7e7a223fecdcc2f788d" Feb 19 19:57:08 crc kubenswrapper[4787]: I0219 19:57:08.498757 4787 scope.go:117] "RemoveContainer" containerID="9920d194475cf3325ff71e36f0b05d0e5d583bcdc33a14daeffdeca311cc5616" Feb 19 19:57:13 crc kubenswrapper[4787]: I0219 19:57:13.894526 4787 scope.go:117] "RemoveContainer" 
containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:57:13 crc kubenswrapper[4787]: E0219 19:57:13.898719 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:57:25 crc kubenswrapper[4787]: I0219 19:57:25.893287 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:57:25 crc kubenswrapper[4787]: E0219 19:57:25.894248 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:57:36 crc kubenswrapper[4787]: I0219 19:57:36.892102 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:57:36 crc kubenswrapper[4787]: E0219 19:57:36.892974 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:57:45 crc kubenswrapper[4787]: I0219 19:57:45.271909 4787 generic.go:334] 
"Generic (PLEG): container finished" podID="0a8fa017-7d8f-49d2-bd24-2a65b206e279" containerID="9157d921113ee71fb1a5d54841eeb02360b01bf8c09b69c3398a2908531164cd" exitCode=0 Feb 19 19:57:45 crc kubenswrapper[4787]: I0219 19:57:45.272002 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" event={"ID":"0a8fa017-7d8f-49d2-bd24-2a65b206e279","Type":"ContainerDied","Data":"9157d921113ee71fb1a5d54841eeb02360b01bf8c09b69c3398a2908531164cd"} Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.737771 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.862077 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-inventory\") pod \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.862303 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ssh-key-openstack-edpm-ipam\") pod \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.862337 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovn-combined-ca-bundle\") pod \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.862411 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovncontroller-config-0\") pod \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.862495 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dbtw\" (UniqueName: \"kubernetes.io/projected/0a8fa017-7d8f-49d2-bd24-2a65b206e279-kube-api-access-9dbtw\") pod \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\" (UID: \"0a8fa017-7d8f-49d2-bd24-2a65b206e279\") " Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.868152 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0a8fa017-7d8f-49d2-bd24-2a65b206e279" (UID: "0a8fa017-7d8f-49d2-bd24-2a65b206e279"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.874049 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8fa017-7d8f-49d2-bd24-2a65b206e279-kube-api-access-9dbtw" (OuterVolumeSpecName: "kube-api-access-9dbtw") pod "0a8fa017-7d8f-49d2-bd24-2a65b206e279" (UID: "0a8fa017-7d8f-49d2-bd24-2a65b206e279"). InnerVolumeSpecName "kube-api-access-9dbtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.891119 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0a8fa017-7d8f-49d2-bd24-2a65b206e279" (UID: "0a8fa017-7d8f-49d2-bd24-2a65b206e279"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.893353 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a8fa017-7d8f-49d2-bd24-2a65b206e279" (UID: "0a8fa017-7d8f-49d2-bd24-2a65b206e279"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.900121 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-inventory" (OuterVolumeSpecName: "inventory") pod "0a8fa017-7d8f-49d2-bd24-2a65b206e279" (UID: "0a8fa017-7d8f-49d2-bd24-2a65b206e279"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.966935 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.967943 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.968016 4787 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.968068 4787 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/0a8fa017-7d8f-49d2-bd24-2a65b206e279-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:46 crc kubenswrapper[4787]: I0219 19:57:46.968132 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dbtw\" (UniqueName: \"kubernetes.io/projected/0a8fa017-7d8f-49d2-bd24-2a65b206e279-kube-api-access-9dbtw\") on node \"crc\" DevicePath \"\"" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.290811 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" event={"ID":"0a8fa017-7d8f-49d2-bd24-2a65b206e279","Type":"ContainerDied","Data":"1919a1eee5faa4da2b8b9929a3ba0d2f8b021da933eb73971989597938646d89"} Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.290852 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1919a1eee5faa4da2b8b9929a3ba0d2f8b021da933eb73971989597938646d89" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.290901 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fz5rx" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.379662 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b"] Feb 19 19:57:47 crc kubenswrapper[4787]: E0219 19:57:47.380278 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8fa017-7d8f-49d2-bd24-2a65b206e279" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.380298 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8fa017-7d8f-49d2-bd24-2a65b206e279" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.380531 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8fa017-7d8f-49d2-bd24-2a65b206e279" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.381459 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.383456 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.383723 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.383896 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.384052 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.384209 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.385913 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.389860 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b"] Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.478496 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.478565 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.478919 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.479174 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fztt\" (UniqueName: \"kubernetes.io/projected/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-kube-api-access-7fztt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.479578 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.479841 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.581514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.581596 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.581708 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.581763 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7fztt\" (UniqueName: \"kubernetes.io/projected/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-kube-api-access-7fztt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.581807 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.581862 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.585940 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.586125 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.586383 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.593857 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.598100 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.607369 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fztt\" (UniqueName: \"kubernetes.io/projected/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-kube-api-access-7fztt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b\" (UID: 
\"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:47 crc kubenswrapper[4787]: I0219 19:57:47.765343 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:57:48 crc kubenswrapper[4787]: I0219 19:57:48.310017 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b"] Feb 19 19:57:48 crc kubenswrapper[4787]: W0219 19:57:48.318175 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1a533e_0d5d_4b5d_816d_81fb1bf769be.slice/crio-5d59086ab592e73f729a95c53588bb9e8847f9ebed3b76679420515a9f426d2b WatchSource:0}: Error finding container 5d59086ab592e73f729a95c53588bb9e8847f9ebed3b76679420515a9f426d2b: Status 404 returned error can't find the container with id 5d59086ab592e73f729a95c53588bb9e8847f9ebed3b76679420515a9f426d2b Feb 19 19:57:48 crc kubenswrapper[4787]: I0219 19:57:48.892346 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:57:48 crc kubenswrapper[4787]: E0219 19:57:48.893019 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:57:49 crc kubenswrapper[4787]: I0219 19:57:49.313806 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" 
event={"ID":"dd1a533e-0d5d-4b5d-816d-81fb1bf769be","Type":"ContainerStarted","Data":"3877d2cfbddd5910a12c758ce20415137d9ff05c42ca34f0b8374aec1fd34a59"} Feb 19 19:57:49 crc kubenswrapper[4787]: I0219 19:57:49.314133 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" event={"ID":"dd1a533e-0d5d-4b5d-816d-81fb1bf769be","Type":"ContainerStarted","Data":"5d59086ab592e73f729a95c53588bb9e8847f9ebed3b76679420515a9f426d2b"} Feb 19 19:57:49 crc kubenswrapper[4787]: I0219 19:57:49.332310 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" podStartSLOduration=1.9087737740000001 podStartE2EDuration="2.332289347s" podCreationTimestamp="2026-02-19 19:57:47 +0000 UTC" firstStartedPulling="2026-02-19 19:57:48.322466871 +0000 UTC m=+2336.113132813" lastFinishedPulling="2026-02-19 19:57:48.745982444 +0000 UTC m=+2336.536648386" observedRunningTime="2026-02-19 19:57:49.331998729 +0000 UTC m=+2337.122664691" watchObservedRunningTime="2026-02-19 19:57:49.332289347 +0000 UTC m=+2337.122955289" Feb 19 19:58:03 crc kubenswrapper[4787]: I0219 19:58:03.892701 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:58:03 crc kubenswrapper[4787]: E0219 19:58:03.893536 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:58:18 crc kubenswrapper[4787]: I0219 19:58:18.896663 4787 scope.go:117] "RemoveContainer" 
containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:58:18 crc kubenswrapper[4787]: E0219 19:58:18.897562 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:58:30 crc kubenswrapper[4787]: I0219 19:58:30.893132 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:58:30 crc kubenswrapper[4787]: E0219 19:58:30.894015 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:58:31 crc kubenswrapper[4787]: I0219 19:58:31.788721 4787 generic.go:334] "Generic (PLEG): container finished" podID="dd1a533e-0d5d-4b5d-816d-81fb1bf769be" containerID="3877d2cfbddd5910a12c758ce20415137d9ff05c42ca34f0b8374aec1fd34a59" exitCode=0 Feb 19 19:58:31 crc kubenswrapper[4787]: I0219 19:58:31.788760 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" event={"ID":"dd1a533e-0d5d-4b5d-816d-81fb1bf769be","Type":"ContainerDied","Data":"3877d2cfbddd5910a12c758ce20415137d9ff05c42ca34f0b8374aec1fd34a59"} Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.229226 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.408091 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-ssh-key-openstack-edpm-ipam\") pod \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.408215 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-ovn-metadata-agent-neutron-config-0\") pod \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.408335 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fztt\" (UniqueName: \"kubernetes.io/projected/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-kube-api-access-7fztt\") pod \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.408370 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-nova-metadata-neutron-config-0\") pod \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.408490 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-inventory\") pod \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " Feb 19 19:58:33 crc 
kubenswrapper[4787]: I0219 19:58:33.408519 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-metadata-combined-ca-bundle\") pod \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\" (UID: \"dd1a533e-0d5d-4b5d-816d-81fb1bf769be\") " Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.414011 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-kube-api-access-7fztt" (OuterVolumeSpecName: "kube-api-access-7fztt") pod "dd1a533e-0d5d-4b5d-816d-81fb1bf769be" (UID: "dd1a533e-0d5d-4b5d-816d-81fb1bf769be"). InnerVolumeSpecName "kube-api-access-7fztt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.414379 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dd1a533e-0d5d-4b5d-816d-81fb1bf769be" (UID: "dd1a533e-0d5d-4b5d-816d-81fb1bf769be"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.443678 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "dd1a533e-0d5d-4b5d-816d-81fb1bf769be" (UID: "dd1a533e-0d5d-4b5d-816d-81fb1bf769be"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.444290 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "dd1a533e-0d5d-4b5d-816d-81fb1bf769be" (UID: "dd1a533e-0d5d-4b5d-816d-81fb1bf769be"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.445364 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd1a533e-0d5d-4b5d-816d-81fb1bf769be" (UID: "dd1a533e-0d5d-4b5d-816d-81fb1bf769be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.446882 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-inventory" (OuterVolumeSpecName: "inventory") pod "dd1a533e-0d5d-4b5d-816d-81fb1bf769be" (UID: "dd1a533e-0d5d-4b5d-816d-81fb1bf769be"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.512349 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.512384 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.512398 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fztt\" (UniqueName: \"kubernetes.io/projected/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-kube-api-access-7fztt\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.512407 4787 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.512417 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.512440 4787 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a533e-0d5d-4b5d-816d-81fb1bf769be-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.813522 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" event={"ID":"dd1a533e-0d5d-4b5d-816d-81fb1bf769be","Type":"ContainerDied","Data":"5d59086ab592e73f729a95c53588bb9e8847f9ebed3b76679420515a9f426d2b"} Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.813562 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d59086ab592e73f729a95c53588bb9e8847f9ebed3b76679420515a9f426d2b" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.813631 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.995887 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26"] Feb 19 19:58:33 crc kubenswrapper[4787]: E0219 19:58:33.996738 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1a533e-0d5d-4b5d-816d-81fb1bf769be" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.996761 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1a533e-0d5d-4b5d-816d-81fb1bf769be" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.997020 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1a533e-0d5d-4b5d-816d-81fb1bf769be" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 19:58:33 crc kubenswrapper[4787]: I0219 19:58:33.998280 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.000708 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.000739 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.001501 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.001570 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.002307 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.009497 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26"] Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.126893 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.127208 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.127591 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.127805 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.127976 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k82j8\" (UniqueName: \"kubernetes.io/projected/d51bd7f7-9324-441b-b8f4-1ebde86c404f-kube-api-access-k82j8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.229162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.229244 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.229312 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k82j8\" (UniqueName: \"kubernetes.io/projected/d51bd7f7-9324-441b-b8f4-1ebde86c404f-kube-api-access-k82j8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.229362 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.229452 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.234337 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: 
\"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.234468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.236779 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.236817 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.246468 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k82j8\" (UniqueName: \"kubernetes.io/projected/d51bd7f7-9324-441b-b8f4-1ebde86c404f-kube-api-access-k82j8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-l8q26\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.332931 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 19:58:34 crc kubenswrapper[4787]: I0219 19:58:34.889951 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26"] Feb 19 19:58:35 crc kubenswrapper[4787]: I0219 19:58:35.836006 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" event={"ID":"d51bd7f7-9324-441b-b8f4-1ebde86c404f","Type":"ContainerStarted","Data":"6554c5d7184f0a8412e0beadbc4437b3b60dbe458a7cb4816885227207029850"} Feb 19 19:58:35 crc kubenswrapper[4787]: I0219 19:58:35.836447 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" event={"ID":"d51bd7f7-9324-441b-b8f4-1ebde86c404f","Type":"ContainerStarted","Data":"47f3a24144992478ff0b36d979d40831466f7ec45f7220a5aa98b9bd9177a8c1"} Feb 19 19:58:35 crc kubenswrapper[4787]: I0219 19:58:35.855016 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" podStartSLOduration=2.450223474 podStartE2EDuration="2.854990504s" podCreationTimestamp="2026-02-19 19:58:33 +0000 UTC" firstStartedPulling="2026-02-19 19:58:34.894162111 +0000 UTC m=+2382.684828063" lastFinishedPulling="2026-02-19 19:58:35.298929151 +0000 UTC m=+2383.089595093" observedRunningTime="2026-02-19 19:58:35.854529821 +0000 UTC m=+2383.645195763" watchObservedRunningTime="2026-02-19 19:58:35.854990504 +0000 UTC m=+2383.645656466" Feb 19 19:58:44 crc kubenswrapper[4787]: I0219 19:58:44.893044 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:58:44 crc kubenswrapper[4787]: E0219 19:58:44.894284 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:58:58 crc kubenswrapper[4787]: I0219 19:58:58.892669 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:58:58 crc kubenswrapper[4787]: E0219 19:58:58.893553 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.246221 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7q7qb"] Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.249424 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.264913 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q7qb"] Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.388698 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-utilities\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.388795 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-catalog-content\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.389109 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxpcx\" (UniqueName: \"kubernetes.io/projected/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-kube-api-access-bxpcx\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.491327 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxpcx\" (UniqueName: \"kubernetes.io/projected/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-kube-api-access-bxpcx\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.491448 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-utilities\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.491528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-catalog-content\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.492218 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-utilities\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.492242 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-catalog-content\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.515556 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxpcx\" (UniqueName: \"kubernetes.io/projected/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-kube-api-access-bxpcx\") pod \"community-operators-7q7qb\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:00 crc kubenswrapper[4787]: I0219 19:59:00.591276 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:01 crc kubenswrapper[4787]: I0219 19:59:01.135103 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q7qb"] Feb 19 19:59:02 crc kubenswrapper[4787]: I0219 19:59:02.124534 4787 generic.go:334] "Generic (PLEG): container finished" podID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerID="b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b" exitCode=0 Feb 19 19:59:02 crc kubenswrapper[4787]: I0219 19:59:02.124626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q7qb" event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerDied","Data":"b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b"} Feb 19 19:59:02 crc kubenswrapper[4787]: I0219 19:59:02.125381 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q7qb" event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerStarted","Data":"25cb1a4d7d9d2871d14adeaa8b9ebcfc71bb1d667481dc1daa5b27b962324c9d"} Feb 19 19:59:03 crc kubenswrapper[4787]: I0219 19:59:03.154174 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q7qb" event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerStarted","Data":"d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc"} Feb 19 19:59:05 crc kubenswrapper[4787]: I0219 19:59:05.181162 4787 generic.go:334] "Generic (PLEG): container finished" podID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerID="d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc" exitCode=0 Feb 19 19:59:05 crc kubenswrapper[4787]: I0219 19:59:05.181253 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q7qb" 
event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerDied","Data":"d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc"} Feb 19 19:59:06 crc kubenswrapper[4787]: I0219 19:59:06.194080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q7qb" event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerStarted","Data":"cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080"} Feb 19 19:59:06 crc kubenswrapper[4787]: I0219 19:59:06.209579 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7q7qb" podStartSLOduration=2.757539222 podStartE2EDuration="6.209559869s" podCreationTimestamp="2026-02-19 19:59:00 +0000 UTC" firstStartedPulling="2026-02-19 19:59:02.128850483 +0000 UTC m=+2409.919516435" lastFinishedPulling="2026-02-19 19:59:05.58087113 +0000 UTC m=+2413.371537082" observedRunningTime="2026-02-19 19:59:06.209542478 +0000 UTC m=+2414.000208420" watchObservedRunningTime="2026-02-19 19:59:06.209559869 +0000 UTC m=+2414.000225811" Feb 19 19:59:08 crc kubenswrapper[4787]: I0219 19:59:08.637197 4787 scope.go:117] "RemoveContainer" containerID="cbbe8cfb56c79ff3dc68d86eead317e2d59c3da703bc7aa9f14286c9b05bf9b9" Feb 19 19:59:08 crc kubenswrapper[4787]: I0219 19:59:08.659570 4787 scope.go:117] "RemoveContainer" containerID="4ad6bebd19c5fab1143b2f264173b17aa268154215c187b612d8eab271d51882" Feb 19 19:59:08 crc kubenswrapper[4787]: I0219 19:59:08.682662 4787 scope.go:117] "RemoveContainer" containerID="06dff87ce3690be9716e4f9f95229a2780f5c6382fa04d185b96d2ad646a799e" Feb 19 19:59:10 crc kubenswrapper[4787]: I0219 19:59:10.592255 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:10 crc kubenswrapper[4787]: I0219 19:59:10.592757 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:10 crc kubenswrapper[4787]: I0219 19:59:10.648247 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:11 crc kubenswrapper[4787]: I0219 19:59:11.318926 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:11 crc kubenswrapper[4787]: I0219 19:59:11.376208 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7q7qb"] Feb 19 19:59:11 crc kubenswrapper[4787]: I0219 19:59:11.892590 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:59:11 crc kubenswrapper[4787]: E0219 19:59:11.893046 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.280754 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7q7qb" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="registry-server" containerID="cri-o://cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080" gracePeriod=2 Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.845261 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.948397 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-catalog-content\") pod \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.948537 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-utilities\") pod \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.948631 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxpcx\" (UniqueName: \"kubernetes.io/projected/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-kube-api-access-bxpcx\") pod \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\" (UID: \"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c\") " Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.949829 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-utilities" (OuterVolumeSpecName: "utilities") pod "71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" (UID: "71af5b5c-2e8b-47a6-885a-e3e892dd5a5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:59:13 crc kubenswrapper[4787]: I0219 19:59:13.953901 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-kube-api-access-bxpcx" (OuterVolumeSpecName: "kube-api-access-bxpcx") pod "71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" (UID: "71af5b5c-2e8b-47a6-885a-e3e892dd5a5c"). InnerVolumeSpecName "kube-api-access-bxpcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.009683 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" (UID: "71af5b5c-2e8b-47a6-885a-e3e892dd5a5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.052177 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.052381 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.052461 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxpcx\" (UniqueName: \"kubernetes.io/projected/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c-kube-api-access-bxpcx\") on node \"crc\" DevicePath \"\"" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.298332 4787 generic.go:334] "Generic (PLEG): container finished" podID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerID="cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080" exitCode=0 Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.298647 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q7qb" event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerDied","Data":"cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080"} Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.298675 4787 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-7q7qb" event={"ID":"71af5b5c-2e8b-47a6-885a-e3e892dd5a5c","Type":"ContainerDied","Data":"25cb1a4d7d9d2871d14adeaa8b9ebcfc71bb1d667481dc1daa5b27b962324c9d"} Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.298703 4787 scope.go:117] "RemoveContainer" containerID="cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.298869 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q7qb" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.338668 4787 scope.go:117] "RemoveContainer" containerID="d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.347421 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7q7qb"] Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.359763 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7q7qb"] Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.367000 4787 scope.go:117] "RemoveContainer" containerID="b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.467010 4787 scope.go:117] "RemoveContainer" containerID="cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080" Feb 19 19:59:14 crc kubenswrapper[4787]: E0219 19:59:14.467466 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080\": container with ID starting with cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080 not found: ID does not exist" containerID="cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 
19:59:14.467505 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080"} err="failed to get container status \"cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080\": rpc error: code = NotFound desc = could not find container \"cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080\": container with ID starting with cd9dc60a1601cbce42f6bad4e71930ac60db56911a10bb6a9aa40bd347700080 not found: ID does not exist" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.467526 4787 scope.go:117] "RemoveContainer" containerID="d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc" Feb 19 19:59:14 crc kubenswrapper[4787]: E0219 19:59:14.467867 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc\": container with ID starting with d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc not found: ID does not exist" containerID="d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.467902 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc"} err="failed to get container status \"d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc\": rpc error: code = NotFound desc = could not find container \"d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc\": container with ID starting with d02a198c63448761f5532b8972a62f1db482896b2651cf3019a84b2e914169cc not found: ID does not exist" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.467914 4787 scope.go:117] "RemoveContainer" containerID="b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b" Feb 19 19:59:14 crc 
kubenswrapper[4787]: E0219 19:59:14.468106 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b\": container with ID starting with b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b not found: ID does not exist" containerID="b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.468133 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b"} err="failed to get container status \"b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b\": rpc error: code = NotFound desc = could not find container \"b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b\": container with ID starting with b9e7b362e6cbeca7a49422887c4fa9cdccd9aaef8eede18fa797ec441d12b75b not found: ID does not exist" Feb 19 19:59:14 crc kubenswrapper[4787]: I0219 19:59:14.905758 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" path="/var/lib/kubelet/pods/71af5b5c-2e8b-47a6-885a-e3e892dd5a5c/volumes" Feb 19 19:59:24 crc kubenswrapper[4787]: I0219 19:59:24.892171 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:59:24 crc kubenswrapper[4787]: E0219 19:59:24.892984 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:59:35 crc 
kubenswrapper[4787]: I0219 19:59:35.892207 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:59:35 crc kubenswrapper[4787]: E0219 19:59:35.893104 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:59:46 crc kubenswrapper[4787]: I0219 19:59:46.893090 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:59:46 crc kubenswrapper[4787]: E0219 19:59:46.893887 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 19:59:58 crc kubenswrapper[4787]: I0219 19:59:58.892554 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 19:59:58 crc kubenswrapper[4787]: E0219 19:59:58.893785 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 
19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.154730 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r"] Feb 19 20:00:00 crc kubenswrapper[4787]: E0219 20:00:00.155357 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="extract-utilities" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.155376 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="extract-utilities" Feb 19 20:00:00 crc kubenswrapper[4787]: E0219 20:00:00.155397 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="registry-server" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.155405 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="registry-server" Feb 19 20:00:00 crc kubenswrapper[4787]: E0219 20:00:00.155420 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="extract-content" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.155427 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="extract-content" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.155785 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="71af5b5c-2e8b-47a6-885a-e3e892dd5a5c" containerName="registry-server" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.156859 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.159220 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.159332 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.171242 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r"] Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.251798 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6296235e-0968-4e57-9f65-2b17d626a241-secret-volume\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.251873 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqhn\" (UniqueName: \"kubernetes.io/projected/6296235e-0968-4e57-9f65-2b17d626a241-kube-api-access-pzqhn\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.251952 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6296235e-0968-4e57-9f65-2b17d626a241-config-volume\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.353998 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6296235e-0968-4e57-9f65-2b17d626a241-secret-volume\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.354062 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqhn\" (UniqueName: \"kubernetes.io/projected/6296235e-0968-4e57-9f65-2b17d626a241-kube-api-access-pzqhn\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.354138 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6296235e-0968-4e57-9f65-2b17d626a241-config-volume\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.354948 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6296235e-0968-4e57-9f65-2b17d626a241-config-volume\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.364203 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6296235e-0968-4e57-9f65-2b17d626a241-secret-volume\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.372446 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqhn\" (UniqueName: \"kubernetes.io/projected/6296235e-0968-4e57-9f65-2b17d626a241-kube-api-access-pzqhn\") pod \"collect-profiles-29525520-rks2r\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.481008 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:00 crc kubenswrapper[4787]: I0219 20:00:00.944224 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r"] Feb 19 20:00:01 crc kubenswrapper[4787]: I0219 20:00:01.802792 4787 generic.go:334] "Generic (PLEG): container finished" podID="6296235e-0968-4e57-9f65-2b17d626a241" containerID="9c9fcf288f186da605839b59928711fb1ca3c9a0b4b8771b8d602f544c80d976" exitCode=0 Feb 19 20:00:01 crc kubenswrapper[4787]: I0219 20:00:01.803095 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" event={"ID":"6296235e-0968-4e57-9f65-2b17d626a241","Type":"ContainerDied","Data":"9c9fcf288f186da605839b59928711fb1ca3c9a0b4b8771b8d602f544c80d976"} Feb 19 20:00:01 crc kubenswrapper[4787]: I0219 20:00:01.803119 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" 
event={"ID":"6296235e-0968-4e57-9f65-2b17d626a241","Type":"ContainerStarted","Data":"e69856aca40d5ef52fae55a5fc70858ff631a756273d1c369c15990fdbdf4ee3"} Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.305404 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.426388 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6296235e-0968-4e57-9f65-2b17d626a241-secret-volume\") pod \"6296235e-0968-4e57-9f65-2b17d626a241\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.426583 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzqhn\" (UniqueName: \"kubernetes.io/projected/6296235e-0968-4e57-9f65-2b17d626a241-kube-api-access-pzqhn\") pod \"6296235e-0968-4e57-9f65-2b17d626a241\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.426776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6296235e-0968-4e57-9f65-2b17d626a241-config-volume\") pod \"6296235e-0968-4e57-9f65-2b17d626a241\" (UID: \"6296235e-0968-4e57-9f65-2b17d626a241\") " Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.427371 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6296235e-0968-4e57-9f65-2b17d626a241-config-volume" (OuterVolumeSpecName: "config-volume") pod "6296235e-0968-4e57-9f65-2b17d626a241" (UID: "6296235e-0968-4e57-9f65-2b17d626a241"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.434875 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6296235e-0968-4e57-9f65-2b17d626a241-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6296235e-0968-4e57-9f65-2b17d626a241" (UID: "6296235e-0968-4e57-9f65-2b17d626a241"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.436356 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6296235e-0968-4e57-9f65-2b17d626a241-kube-api-access-pzqhn" (OuterVolumeSpecName: "kube-api-access-pzqhn") pod "6296235e-0968-4e57-9f65-2b17d626a241" (UID: "6296235e-0968-4e57-9f65-2b17d626a241"). InnerVolumeSpecName "kube-api-access-pzqhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.529026 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzqhn\" (UniqueName: \"kubernetes.io/projected/6296235e-0968-4e57-9f65-2b17d626a241-kube-api-access-pzqhn\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.529058 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6296235e-0968-4e57-9f65-2b17d626a241-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.529069 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6296235e-0968-4e57-9f65-2b17d626a241-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.821670 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" 
event={"ID":"6296235e-0968-4e57-9f65-2b17d626a241","Type":"ContainerDied","Data":"e69856aca40d5ef52fae55a5fc70858ff631a756273d1c369c15990fdbdf4ee3"} Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.821731 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r" Feb 19 20:00:03 crc kubenswrapper[4787]: I0219 20:00:03.821736 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e69856aca40d5ef52fae55a5fc70858ff631a756273d1c369c15990fdbdf4ee3" Feb 19 20:00:04 crc kubenswrapper[4787]: I0219 20:00:04.396039 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx"] Feb 19 20:00:04 crc kubenswrapper[4787]: I0219 20:00:04.410767 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525475-bdlsx"] Feb 19 20:00:04 crc kubenswrapper[4787]: I0219 20:00:04.907069 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35db86a-2b97-4f8b-bb87-6ab2b004d5e5" path="/var/lib/kubelet/pods/e35db86a-2b97-4f8b-bb87-6ab2b004d5e5/volumes" Feb 19 20:00:08 crc kubenswrapper[4787]: I0219 20:00:08.786409 4787 scope.go:117] "RemoveContainer" containerID="f0b255258f985c70a9692a8f775373e24765d43a1f661523ae7d20850feb00ac" Feb 19 20:00:13 crc kubenswrapper[4787]: I0219 20:00:13.892106 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 20:00:14 crc kubenswrapper[4787]: I0219 20:00:14.936525 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"72a8a65f261025714c105ad6cfbe05c8731adf0dbe3eb30d70b836574416bb23"} Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.149567 4787 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525521-pkssb"] Feb 19 20:01:00 crc kubenswrapper[4787]: E0219 20:01:00.150640 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6296235e-0968-4e57-9f65-2b17d626a241" containerName="collect-profiles" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.150654 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="6296235e-0968-4e57-9f65-2b17d626a241" containerName="collect-profiles" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.150939 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="6296235e-0968-4e57-9f65-2b17d626a241" containerName="collect-profiles" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.151818 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.160166 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525521-pkssb"] Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.301494 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p47g\" (UniqueName: \"kubernetes.io/projected/468d7d08-0fac-4e00-a1c0-c244a2b39aee-kube-api-access-6p47g\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.302086 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-config-data\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.302242 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-fernet-keys\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.302320 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-combined-ca-bundle\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.404583 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-fernet-keys\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.404676 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-combined-ca-bundle\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.404725 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p47g\" (UniqueName: \"kubernetes.io/projected/468d7d08-0fac-4e00-a1c0-c244a2b39aee-kube-api-access-6p47g\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.404860 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-config-data\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.411717 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-fernet-keys\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.411740 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-combined-ca-bundle\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.429651 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-config-data\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.431407 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p47g\" (UniqueName: \"kubernetes.io/projected/468d7d08-0fac-4e00-a1c0-c244a2b39aee-kube-api-access-6p47g\") pod \"keystone-cron-29525521-pkssb\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.475700 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:00 crc kubenswrapper[4787]: I0219 20:01:00.957226 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525521-pkssb"] Feb 19 20:01:01 crc kubenswrapper[4787]: I0219 20:01:01.546687 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-pkssb" event={"ID":"468d7d08-0fac-4e00-a1c0-c244a2b39aee","Type":"ContainerStarted","Data":"74e196cc5ab276121f5a26c3b0a706bcb070b7a817bff05da8a200723c06b353"} Feb 19 20:01:01 crc kubenswrapper[4787]: I0219 20:01:01.547015 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-pkssb" event={"ID":"468d7d08-0fac-4e00-a1c0-c244a2b39aee","Type":"ContainerStarted","Data":"59aa96082bacf2ffde6ba613dfc0ec89d191f53d7caa5cbbcaa9e764eaf36dd8"} Feb 19 20:01:01 crc kubenswrapper[4787]: I0219 20:01:01.568779 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525521-pkssb" podStartSLOduration=1.568592034 podStartE2EDuration="1.568592034s" podCreationTimestamp="2026-02-19 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:01:01.559690011 +0000 UTC m=+2529.350355973" watchObservedRunningTime="2026-02-19 20:01:01.568592034 +0000 UTC m=+2529.359257976" Feb 19 20:01:04 crc kubenswrapper[4787]: I0219 20:01:04.575596 4787 generic.go:334] "Generic (PLEG): container finished" podID="468d7d08-0fac-4e00-a1c0-c244a2b39aee" containerID="74e196cc5ab276121f5a26c3b0a706bcb070b7a817bff05da8a200723c06b353" exitCode=0 Feb 19 20:01:04 crc kubenswrapper[4787]: I0219 20:01:04.575648 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-pkssb" 
event={"ID":"468d7d08-0fac-4e00-a1c0-c244a2b39aee","Type":"ContainerDied","Data":"74e196cc5ab276121f5a26c3b0a706bcb070b7a817bff05da8a200723c06b353"} Feb 19 20:01:05 crc kubenswrapper[4787]: I0219 20:01:05.979863 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.045894 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-combined-ca-bundle\") pod \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.045990 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-fernet-keys\") pod \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.046033 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p47g\" (UniqueName: \"kubernetes.io/projected/468d7d08-0fac-4e00-a1c0-c244a2b39aee-kube-api-access-6p47g\") pod \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.046073 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-config-data\") pod \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\" (UID: \"468d7d08-0fac-4e00-a1c0-c244a2b39aee\") " Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.051856 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/468d7d08-0fac-4e00-a1c0-c244a2b39aee-kube-api-access-6p47g" 
(OuterVolumeSpecName: "kube-api-access-6p47g") pod "468d7d08-0fac-4e00-a1c0-c244a2b39aee" (UID: "468d7d08-0fac-4e00-a1c0-c244a2b39aee"). InnerVolumeSpecName "kube-api-access-6p47g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.063974 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "468d7d08-0fac-4e00-a1c0-c244a2b39aee" (UID: "468d7d08-0fac-4e00-a1c0-c244a2b39aee"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.078792 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "468d7d08-0fac-4e00-a1c0-c244a2b39aee" (UID: "468d7d08-0fac-4e00-a1c0-c244a2b39aee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.122996 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-config-data" (OuterVolumeSpecName: "config-data") pod "468d7d08-0fac-4e00-a1c0-c244a2b39aee" (UID: "468d7d08-0fac-4e00-a1c0-c244a2b39aee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.148646 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.148678 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.148688 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p47g\" (UniqueName: \"kubernetes.io/projected/468d7d08-0fac-4e00-a1c0-c244a2b39aee-kube-api-access-6p47g\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.148698 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d08-0fac-4e00-a1c0-c244a2b39aee-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.598714 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525521-pkssb" event={"ID":"468d7d08-0fac-4e00-a1c0-c244a2b39aee","Type":"ContainerDied","Data":"59aa96082bacf2ffde6ba613dfc0ec89d191f53d7caa5cbbcaa9e764eaf36dd8"} Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.599044 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59aa96082bacf2ffde6ba613dfc0ec89d191f53d7caa5cbbcaa9e764eaf36dd8" Feb 19 20:01:06 crc kubenswrapper[4787]: I0219 20:01:06.598769 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525521-pkssb" Feb 19 20:02:06 crc kubenswrapper[4787]: I0219 20:02:06.179267 4787 generic.go:334] "Generic (PLEG): container finished" podID="d51bd7f7-9324-441b-b8f4-1ebde86c404f" containerID="6554c5d7184f0a8412e0beadbc4437b3b60dbe458a7cb4816885227207029850" exitCode=0 Feb 19 20:02:06 crc kubenswrapper[4787]: I0219 20:02:06.179320 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" event={"ID":"d51bd7f7-9324-441b-b8f4-1ebde86c404f","Type":"ContainerDied","Data":"6554c5d7184f0a8412e0beadbc4437b3b60dbe458a7cb4816885227207029850"} Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.670772 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.785032 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-combined-ca-bundle\") pod \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.785164 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-inventory\") pod \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.785997 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-secret-0\") pod \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 
20:02:07.786049 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-ssh-key-openstack-edpm-ipam\") pod \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.786325 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k82j8\" (UniqueName: \"kubernetes.io/projected/d51bd7f7-9324-441b-b8f4-1ebde86c404f-kube-api-access-k82j8\") pod \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\" (UID: \"d51bd7f7-9324-441b-b8f4-1ebde86c404f\") " Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.791199 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51bd7f7-9324-441b-b8f4-1ebde86c404f-kube-api-access-k82j8" (OuterVolumeSpecName: "kube-api-access-k82j8") pod "d51bd7f7-9324-441b-b8f4-1ebde86c404f" (UID: "d51bd7f7-9324-441b-b8f4-1ebde86c404f"). InnerVolumeSpecName "kube-api-access-k82j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.796963 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d51bd7f7-9324-441b-b8f4-1ebde86c404f" (UID: "d51bd7f7-9324-441b-b8f4-1ebde86c404f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.817755 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d51bd7f7-9324-441b-b8f4-1ebde86c404f" (UID: "d51bd7f7-9324-441b-b8f4-1ebde86c404f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.826253 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d51bd7f7-9324-441b-b8f4-1ebde86c404f" (UID: "d51bd7f7-9324-441b-b8f4-1ebde86c404f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.838366 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-inventory" (OuterVolumeSpecName: "inventory") pod "d51bd7f7-9324-441b-b8f4-1ebde86c404f" (UID: "d51bd7f7-9324-441b-b8f4-1ebde86c404f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.889138 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.889178 4787 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.889190 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.889200 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k82j8\" (UniqueName: \"kubernetes.io/projected/d51bd7f7-9324-441b-b8f4-1ebde86c404f-kube-api-access-k82j8\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:07 crc kubenswrapper[4787]: I0219 20:02:07.889211 4787 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51bd7f7-9324-441b-b8f4-1ebde86c404f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.200709 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" event={"ID":"d51bd7f7-9324-441b-b8f4-1ebde86c404f","Type":"ContainerDied","Data":"47f3a24144992478ff0b36d979d40831466f7ec45f7220a5aa98b9bd9177a8c1"} Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.200748 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47f3a24144992478ff0b36d979d40831466f7ec45f7220a5aa98b9bd9177a8c1" Feb 19 
20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.200815 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-l8q26" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.347813 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq"] Feb 19 20:02:08 crc kubenswrapper[4787]: E0219 20:02:08.348794 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51bd7f7-9324-441b-b8f4-1ebde86c404f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.348817 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51bd7f7-9324-441b-b8f4-1ebde86c404f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 20:02:08 crc kubenswrapper[4787]: E0219 20:02:08.348857 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468d7d08-0fac-4e00-a1c0-c244a2b39aee" containerName="keystone-cron" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.348864 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="468d7d08-0fac-4e00-a1c0-c244a2b39aee" containerName="keystone-cron" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.349414 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="468d7d08-0fac-4e00-a1c0-c244a2b39aee" containerName="keystone-cron" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.349456 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51bd7f7-9324-441b-b8f4-1ebde86c404f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.350588 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.353482 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.356199 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.356293 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.356955 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.357494 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.357671 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.357931 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.410093 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq"] Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507534 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: 
I0219 20:02:08.507597 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507662 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507736 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp62r\" (UniqueName: \"kubernetes.io/projected/044bb1cd-0401-4d14-9fbe-10160ee01243-kube-api-access-bp62r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507754 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507855 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507884 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507930 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507975 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: 
\"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.507998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610319 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610375 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610401 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610453 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610483 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610540 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610836 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610904 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp62r\" (UniqueName: \"kubernetes.io/projected/044bb1cd-0401-4d14-9fbe-10160ee01243-kube-api-access-bp62r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: 
\"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.610930 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.611105 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.611158 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.612499 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.615064 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.615449 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.616097 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.616277 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.617625 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.620656 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.620820 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.626335 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.631589 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.632425 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp62r\" (UniqueName: 
\"kubernetes.io/projected/044bb1cd-0401-4d14-9fbe-10160ee01243-kube-api-access-bp62r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-frmbq\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:08 crc kubenswrapper[4787]: I0219 20:02:08.681149 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:02:09 crc kubenswrapper[4787]: I0219 20:02:09.265505 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq"] Feb 19 20:02:09 crc kubenswrapper[4787]: W0219 20:02:09.266879 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044bb1cd_0401_4d14_9fbe_10160ee01243.slice/crio-238eb43460283b32560c232fd54c0a84da94b379dd03c7a38a286f595f5ecf3b WatchSource:0}: Error finding container 238eb43460283b32560c232fd54c0a84da94b379dd03c7a38a286f595f5ecf3b: Status 404 returned error can't find the container with id 238eb43460283b32560c232fd54c0a84da94b379dd03c7a38a286f595f5ecf3b Feb 19 20:02:09 crc kubenswrapper[4787]: I0219 20:02:09.269474 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:02:10 crc kubenswrapper[4787]: I0219 20:02:10.221633 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" event={"ID":"044bb1cd-0401-4d14-9fbe-10160ee01243","Type":"ContainerStarted","Data":"5f2f9710d24d1dd2913f306babd651d79cb1e26f89b157eb682db625467689f9"} Feb 19 20:02:10 crc kubenswrapper[4787]: I0219 20:02:10.222019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" 
event={"ID":"044bb1cd-0401-4d14-9fbe-10160ee01243","Type":"ContainerStarted","Data":"238eb43460283b32560c232fd54c0a84da94b379dd03c7a38a286f595f5ecf3b"} Feb 19 20:02:10 crc kubenswrapper[4787]: I0219 20:02:10.247220 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" podStartSLOduration=1.845046257 podStartE2EDuration="2.247201583s" podCreationTimestamp="2026-02-19 20:02:08 +0000 UTC" firstStartedPulling="2026-02-19 20:02:09.269291544 +0000 UTC m=+2597.059957486" lastFinishedPulling="2026-02-19 20:02:09.67144687 +0000 UTC m=+2597.462112812" observedRunningTime="2026-02-19 20:02:10.24251656 +0000 UTC m=+2598.033182502" watchObservedRunningTime="2026-02-19 20:02:10.247201583 +0000 UTC m=+2598.037867525" Feb 19 20:02:39 crc kubenswrapper[4787]: I0219 20:02:39.263661 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:02:39 crc kubenswrapper[4787]: I0219 20:02:39.264750 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:03:09 crc kubenswrapper[4787]: I0219 20:03:09.263840 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:03:09 crc kubenswrapper[4787]: I0219 20:03:09.264421 4787 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:03:39 crc kubenswrapper[4787]: I0219 20:03:39.263754 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:03:39 crc kubenswrapper[4787]: I0219 20:03:39.264418 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:03:39 crc kubenswrapper[4787]: I0219 20:03:39.264467 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:03:39 crc kubenswrapper[4787]: I0219 20:03:39.265952 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72a8a65f261025714c105ad6cfbe05c8731adf0dbe3eb30d70b836574416bb23"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:03:39 crc kubenswrapper[4787]: I0219 20:03:39.266124 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" 
containerID="cri-o://72a8a65f261025714c105ad6cfbe05c8731adf0dbe3eb30d70b836574416bb23" gracePeriod=600 Feb 19 20:03:40 crc kubenswrapper[4787]: I0219 20:03:40.251213 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="72a8a65f261025714c105ad6cfbe05c8731adf0dbe3eb30d70b836574416bb23" exitCode=0 Feb 19 20:03:40 crc kubenswrapper[4787]: I0219 20:03:40.251390 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"72a8a65f261025714c105ad6cfbe05c8731adf0dbe3eb30d70b836574416bb23"} Feb 19 20:03:40 crc kubenswrapper[4787]: I0219 20:03:40.251508 4787 scope.go:117] "RemoveContainer" containerID="032f01f2fac09d06d5510f6f89ba3a992da4198198cfebbf348247a1830e310a" Feb 19 20:03:41 crc kubenswrapper[4787]: I0219 20:03:41.265639 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80"} Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.763930 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hq7nq"] Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.768768 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.779106 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq7nq"] Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.893435 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89l85\" (UniqueName: \"kubernetes.io/projected/b3a70288-49f1-422c-88b1-cc6e6131f963-kube-api-access-89l85\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.893752 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-catalog-content\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.893859 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-utilities\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.996657 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89l85\" (UniqueName: \"kubernetes.io/projected/b3a70288-49f1-422c-88b1-cc6e6131f963-kube-api-access-89l85\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.996732 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-catalog-content\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.996787 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-utilities\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.997451 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-catalog-content\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:50 crc kubenswrapper[4787]: I0219 20:03:50.997749 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-utilities\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:51 crc kubenswrapper[4787]: I0219 20:03:51.021106 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89l85\" (UniqueName: \"kubernetes.io/projected/b3a70288-49f1-422c-88b1-cc6e6131f963-kube-api-access-89l85\") pod \"certified-operators-hq7nq\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:51 crc kubenswrapper[4787]: I0219 20:03:51.092573 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:03:51 crc kubenswrapper[4787]: I0219 20:03:51.650243 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq7nq"] Feb 19 20:03:52 crc kubenswrapper[4787]: I0219 20:03:52.395489 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerStarted","Data":"2c2c535f9bf741ce7d2f8b869d740035e5e8c7b34246044b260a30b216c9a227"} Feb 19 20:03:53 crc kubenswrapper[4787]: I0219 20:03:53.405455 4787 generic.go:334] "Generic (PLEG): container finished" podID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerID="f240314e2cf3af2eb019263e4640e438762122dd3133e2b52a5282975e6c47b4" exitCode=0 Feb 19 20:03:53 crc kubenswrapper[4787]: I0219 20:03:53.405553 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerDied","Data":"f240314e2cf3af2eb019263e4640e438762122dd3133e2b52a5282975e6c47b4"} Feb 19 20:03:55 crc kubenswrapper[4787]: I0219 20:03:55.424892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerStarted","Data":"f805d199132a00e40e1034cb03ccd375fab01b04e5af7965cad96a3296f63609"} Feb 19 20:04:03 crc kubenswrapper[4787]: I0219 20:04:03.508036 4787 generic.go:334] "Generic (PLEG): container finished" podID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerID="f805d199132a00e40e1034cb03ccd375fab01b04e5af7965cad96a3296f63609" exitCode=0 Feb 19 20:04:03 crc kubenswrapper[4787]: I0219 20:04:03.508119 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" 
event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerDied","Data":"f805d199132a00e40e1034cb03ccd375fab01b04e5af7965cad96a3296f63609"} Feb 19 20:04:06 crc kubenswrapper[4787]: I0219 20:04:06.548070 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerStarted","Data":"17cadc7275942cc00a451e2e6f261ae3a7417f3788d7bb6462a648f37e1c3efd"} Feb 19 20:04:06 crc kubenswrapper[4787]: I0219 20:04:06.572160 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hq7nq" podStartSLOduration=4.503487541 podStartE2EDuration="16.572144666s" podCreationTimestamp="2026-02-19 20:03:50 +0000 UTC" firstStartedPulling="2026-02-19 20:03:53.407858213 +0000 UTC m=+2701.198524185" lastFinishedPulling="2026-02-19 20:04:05.476515368 +0000 UTC m=+2713.267181310" observedRunningTime="2026-02-19 20:04:06.567943636 +0000 UTC m=+2714.358609578" watchObservedRunningTime="2026-02-19 20:04:06.572144666 +0000 UTC m=+2714.362810608" Feb 19 20:04:11 crc kubenswrapper[4787]: I0219 20:04:11.093377 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:04:11 crc kubenswrapper[4787]: I0219 20:04:11.094805 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:04:11 crc kubenswrapper[4787]: I0219 20:04:11.139058 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:04:11 crc kubenswrapper[4787]: I0219 20:04:11.653896 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:04:11 crc kubenswrapper[4787]: I0219 20:04:11.707312 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hq7nq"] Feb 19 20:04:13 crc kubenswrapper[4787]: I0219 20:04:13.616496 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hq7nq" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="registry-server" containerID="cri-o://17cadc7275942cc00a451e2e6f261ae3a7417f3788d7bb6462a648f37e1c3efd" gracePeriod=2 Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.633137 4787 generic.go:334] "Generic (PLEG): container finished" podID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerID="17cadc7275942cc00a451e2e6f261ae3a7417f3788d7bb6462a648f37e1c3efd" exitCode=0 Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.633217 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerDied","Data":"17cadc7275942cc00a451e2e6f261ae3a7417f3788d7bb6462a648f37e1c3efd"} Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.890190 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.985418 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89l85\" (UniqueName: \"kubernetes.io/projected/b3a70288-49f1-422c-88b1-cc6e6131f963-kube-api-access-89l85\") pod \"b3a70288-49f1-422c-88b1-cc6e6131f963\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.985684 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-catalog-content\") pod \"b3a70288-49f1-422c-88b1-cc6e6131f963\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.985770 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-utilities\") pod \"b3a70288-49f1-422c-88b1-cc6e6131f963\" (UID: \"b3a70288-49f1-422c-88b1-cc6e6131f963\") " Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.987227 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-utilities" (OuterVolumeSpecName: "utilities") pod "b3a70288-49f1-422c-88b1-cc6e6131f963" (UID: "b3a70288-49f1-422c-88b1-cc6e6131f963"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:04:14 crc kubenswrapper[4787]: I0219 20:04:14.993925 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a70288-49f1-422c-88b1-cc6e6131f963-kube-api-access-89l85" (OuterVolumeSpecName: "kube-api-access-89l85") pod "b3a70288-49f1-422c-88b1-cc6e6131f963" (UID: "b3a70288-49f1-422c-88b1-cc6e6131f963"). InnerVolumeSpecName "kube-api-access-89l85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.057101 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3a70288-49f1-422c-88b1-cc6e6131f963" (UID: "b3a70288-49f1-422c-88b1-cc6e6131f963"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.089762 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.089828 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a70288-49f1-422c-88b1-cc6e6131f963-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.089843 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89l85\" (UniqueName: \"kubernetes.io/projected/b3a70288-49f1-422c-88b1-cc6e6131f963-kube-api-access-89l85\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.647593 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq7nq" event={"ID":"b3a70288-49f1-422c-88b1-cc6e6131f963","Type":"ContainerDied","Data":"2c2c535f9bf741ce7d2f8b869d740035e5e8c7b34246044b260a30b216c9a227"} Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.648729 4787 scope.go:117] "RemoveContainer" containerID="17cadc7275942cc00a451e2e6f261ae3a7417f3788d7bb6462a648f37e1c3efd" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.647910 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq7nq" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.673504 4787 scope.go:117] "RemoveContainer" containerID="f805d199132a00e40e1034cb03ccd375fab01b04e5af7965cad96a3296f63609" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.696814 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq7nq"] Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.718209 4787 scope.go:117] "RemoveContainer" containerID="f240314e2cf3af2eb019263e4640e438762122dd3133e2b52a5282975e6c47b4" Feb 19 20:04:15 crc kubenswrapper[4787]: I0219 20:04:15.719270 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hq7nq"] Feb 19 20:04:16 crc kubenswrapper[4787]: I0219 20:04:16.910084 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" path="/var/lib/kubelet/pods/b3a70288-49f1-422c-88b1-cc6e6131f963/volumes" Feb 19 20:04:23 crc kubenswrapper[4787]: I0219 20:04:23.723429 4787 generic.go:334] "Generic (PLEG): container finished" podID="044bb1cd-0401-4d14-9fbe-10160ee01243" containerID="5f2f9710d24d1dd2913f306babd651d79cb1e26f89b157eb682db625467689f9" exitCode=0 Feb 19 20:04:23 crc kubenswrapper[4787]: I0219 20:04:23.723526 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" event={"ID":"044bb1cd-0401-4d14-9fbe-10160ee01243","Type":"ContainerDied","Data":"5f2f9710d24d1dd2913f306babd651d79cb1e26f89b157eb682db625467689f9"} Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.254929 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364628 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-2\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364751 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-1\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364776 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-3\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364800 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-1\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364851 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-inventory\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364879 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp62r\" (UniqueName: \"kubernetes.io/projected/044bb1cd-0401-4d14-9fbe-10160ee01243-kube-api-access-bp62r\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.364997 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-0\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.365020 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-combined-ca-bundle\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.365045 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-ssh-key-openstack-edpm-ipam\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.365082 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-extra-config-0\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.365112 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-0\") pod \"044bb1cd-0401-4d14-9fbe-10160ee01243\" (UID: \"044bb1cd-0401-4d14-9fbe-10160ee01243\") " Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.386997 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.389405 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044bb1cd-0401-4d14-9fbe-10160ee01243-kube-api-access-bp62r" (OuterVolumeSpecName: "kube-api-access-bp62r") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "kube-api-access-bp62r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.402009 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.402328 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.405561 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.406121 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-inventory" (OuterVolumeSpecName: "inventory") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.410059 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.411865 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.413850 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.428734 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.435782 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "044bb1cd-0401-4d14-9fbe-10160ee01243" (UID: "044bb1cd-0401-4d14-9fbe-10160ee01243"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469123 4787 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469161 4787 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469175 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469187 4787 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469383 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469394 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469406 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469418 4787 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469429 4787 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469441 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/044bb1cd-0401-4d14-9fbe-10160ee01243-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.469455 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp62r\" (UniqueName: \"kubernetes.io/projected/044bb1cd-0401-4d14-9fbe-10160ee01243-kube-api-access-bp62r\") on node \"crc\" DevicePath \"\"" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.746046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" event={"ID":"044bb1cd-0401-4d14-9fbe-10160ee01243","Type":"ContainerDied","Data":"238eb43460283b32560c232fd54c0a84da94b379dd03c7a38a286f595f5ecf3b"} Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.746093 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="238eb43460283b32560c232fd54c0a84da94b379dd03c7a38a286f595f5ecf3b" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.746095 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-frmbq" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.854545 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n"] Feb 19 20:04:25 crc kubenswrapper[4787]: E0219 20:04:25.855363 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="extract-utilities" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.855383 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="extract-utilities" Feb 19 20:04:25 crc kubenswrapper[4787]: E0219 20:04:25.855395 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="extract-content" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.855402 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="extract-content" Feb 19 20:04:25 crc kubenswrapper[4787]: E0219 20:04:25.855430 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="registry-server" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.855436 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="registry-server" Feb 19 20:04:25 crc kubenswrapper[4787]: E0219 20:04:25.855454 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044bb1cd-0401-4d14-9fbe-10160ee01243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.855459 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="044bb1cd-0401-4d14-9fbe-10160ee01243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.855761 4787 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a70288-49f1-422c-88b1-cc6e6131f963" containerName="registry-server" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.855808 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="044bb1cd-0401-4d14-9fbe-10160ee01243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.856961 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.859141 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.859303 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.859340 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.859506 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.860863 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.869108 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n"] Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881398 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-0\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881484 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881533 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881561 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgx2m\" (UniqueName: \"kubernetes.io/projected/493db6ac-9a60-4a9a-9cea-e99c4580569e-kube-api-access-rgx2m\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881587 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: 
\"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881700 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.881731 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.984389 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.984518 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc 
kubenswrapper[4787]: I0219 20:04:25.984589 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.984706 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgx2m\" (UniqueName: \"kubernetes.io/projected/493db6ac-9a60-4a9a-9cea-e99c4580569e-kube-api-access-rgx2m\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.984740 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.984826 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.984880 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.989930 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.989989 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.990518 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.993451 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:25 crc kubenswrapper[4787]: I0219 20:04:25.998522 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:26 crc kubenswrapper[4787]: I0219 20:04:26.000068 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:26 crc kubenswrapper[4787]: I0219 20:04:26.016187 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgx2m\" (UniqueName: \"kubernetes.io/projected/493db6ac-9a60-4a9a-9cea-e99c4580569e-kube-api-access-rgx2m\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-42v9n\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:26 crc kubenswrapper[4787]: I0219 20:04:26.211779 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:04:26 crc kubenswrapper[4787]: I0219 20:04:26.770745 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n"] Feb 19 20:04:27 crc kubenswrapper[4787]: I0219 20:04:27.772641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" event={"ID":"493db6ac-9a60-4a9a-9cea-e99c4580569e","Type":"ContainerStarted","Data":"0e70c56875430f8020ed9845962e5a43971faff015a587b1a6ccad8c2d513863"} Feb 19 20:04:27 crc kubenswrapper[4787]: I0219 20:04:27.772987 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" event={"ID":"493db6ac-9a60-4a9a-9cea-e99c4580569e","Type":"ContainerStarted","Data":"effb16fb6bb8325e487374aba489caaa92ead553f1a72b2216b24fcce5e05a80"} Feb 19 20:04:27 crc kubenswrapper[4787]: I0219 20:04:27.806145 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" podStartSLOduration=2.401020211 podStartE2EDuration="2.806126692s" podCreationTimestamp="2026-02-19 20:04:25 +0000 UTC" firstStartedPulling="2026-02-19 20:04:26.771682154 +0000 UTC m=+2734.562348096" lastFinishedPulling="2026-02-19 20:04:27.176788635 +0000 UTC m=+2734.967454577" observedRunningTime="2026-02-19 20:04:27.798733882 +0000 UTC m=+2735.589399824" watchObservedRunningTime="2026-02-19 20:04:27.806126692 +0000 UTC m=+2735.596792634" Feb 19 20:06:09 crc kubenswrapper[4787]: I0219 20:06:09.263566 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:06:09 crc kubenswrapper[4787]: 
I0219 20:06:09.264142 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:06:38 crc kubenswrapper[4787]: I0219 20:06:38.094995 4787 generic.go:334] "Generic (PLEG): container finished" podID="493db6ac-9a60-4a9a-9cea-e99c4580569e" containerID="0e70c56875430f8020ed9845962e5a43971faff015a587b1a6ccad8c2d513863" exitCode=0 Feb 19 20:06:38 crc kubenswrapper[4787]: I0219 20:06:38.095089 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" event={"ID":"493db6ac-9a60-4a9a-9cea-e99c4580569e","Type":"ContainerDied","Data":"0e70c56875430f8020ed9845962e5a43971faff015a587b1a6ccad8c2d513863"} Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.263292 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.263662 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.615990 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.782911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-inventory\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.783347 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgx2m\" (UniqueName: \"kubernetes.io/projected/493db6ac-9a60-4a9a-9cea-e99c4580569e-kube-api-access-rgx2m\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.783371 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ssh-key-openstack-edpm-ipam\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.783403 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-0\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.783469 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-2\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 
20:06:39.783548 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-1\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.783581 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-telemetry-combined-ca-bundle\") pod \"493db6ac-9a60-4a9a-9cea-e99c4580569e\" (UID: \"493db6ac-9a60-4a9a-9cea-e99c4580569e\") " Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.788824 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493db6ac-9a60-4a9a-9cea-e99c4580569e-kube-api-access-rgx2m" (OuterVolumeSpecName: "kube-api-access-rgx2m") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "kube-api-access-rgx2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.790761 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.820008 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.823879 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.825995 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.829076 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-inventory" (OuterVolumeSpecName: "inventory") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.832323 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "493db6ac-9a60-4a9a-9cea-e99c4580569e" (UID: "493db6ac-9a60-4a9a-9cea-e99c4580569e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886894 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgx2m\" (UniqueName: \"kubernetes.io/projected/493db6ac-9a60-4a9a-9cea-e99c4580569e-kube-api-access-rgx2m\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886930 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886940 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886950 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886959 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-ceilometer-compute-config-data-1\") on node 
\"crc\" DevicePath \"\"" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886981 4787 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:39 crc kubenswrapper[4787]: I0219 20:06:39.886994 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/493db6ac-9a60-4a9a-9cea-e99c4580569e-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.117005 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" event={"ID":"493db6ac-9a60-4a9a-9cea-e99c4580569e","Type":"ContainerDied","Data":"effb16fb6bb8325e487374aba489caaa92ead553f1a72b2216b24fcce5e05a80"} Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.117056 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-42v9n" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.117072 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="effb16fb6bb8325e487374aba489caaa92ead553f1a72b2216b24fcce5e05a80" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.206801 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp"] Feb 19 20:06:40 crc kubenswrapper[4787]: E0219 20:06:40.207435 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493db6ac-9a60-4a9a-9cea-e99c4580569e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.207454 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="493db6ac-9a60-4a9a-9cea-e99c4580569e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.207784 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="493db6ac-9a60-4a9a-9cea-e99c4580569e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.209181 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.211659 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.211862 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.211925 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.212226 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.212382 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.225302 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp"] Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.297155 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvmv\" (UniqueName: \"kubernetes.io/projected/c8b17184-10d3-4bed-a849-e9b38351d827-kube-api-access-dkvmv\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.297974 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.298163 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.298292 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.298347 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.298495 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.298571 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.400895 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvmv\" (UniqueName: \"kubernetes.io/projected/c8b17184-10d3-4bed-a849-e9b38351d827-kube-api-access-dkvmv\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.401213 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.401364 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.401504 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.401600 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.401763 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.401870 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.406547 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.406594 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.407275 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.407750 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-1\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.408163 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.408305 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.417787 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvmv\" (UniqueName: \"kubernetes.io/projected/c8b17184-10d3-4bed-a849-e9b38351d827-kube-api-access-dkvmv\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:40 crc kubenswrapper[4787]: I0219 20:06:40.528881 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:06:41 crc kubenswrapper[4787]: I0219 20:06:41.101602 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp"] Feb 19 20:06:41 crc kubenswrapper[4787]: I0219 20:06:41.129331 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" event={"ID":"c8b17184-10d3-4bed-a849-e9b38351d827","Type":"ContainerStarted","Data":"ee35987e2a9ed144a2ba7de6589e4a63aed5ff6572fd618dfa0234e4b138a3fd"} Feb 19 20:06:42 crc kubenswrapper[4787]: I0219 20:06:42.148163 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" event={"ID":"c8b17184-10d3-4bed-a849-e9b38351d827","Type":"ContainerStarted","Data":"23904f8fe3d35e75058a61610b97455d6c682d4d3210b0ce541ae787ee2e2415"} Feb 19 20:06:42 crc kubenswrapper[4787]: I0219 20:06:42.167957 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" podStartSLOduration=1.732651782 podStartE2EDuration="2.167940551s" podCreationTimestamp="2026-02-19 20:06:40 +0000 UTC" firstStartedPulling="2026-02-19 20:06:41.104489429 +0000 UTC m=+2868.895155381" lastFinishedPulling="2026-02-19 20:06:41.539778208 +0000 UTC m=+2869.330444150" observedRunningTime="2026-02-19 20:06:42.167172099 +0000 UTC m=+2869.957838041" watchObservedRunningTime="2026-02-19 20:06:42.167940551 +0000 UTC m=+2869.958606493" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.665223 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wqwr"] Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.670158 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.680123 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wqwr"] Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.746249 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-catalog-content\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.746651 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmsll\" (UniqueName: \"kubernetes.io/projected/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-kube-api-access-mmsll\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.746906 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-utilities\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.849406 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-catalog-content\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.849512 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mmsll\" (UniqueName: \"kubernetes.io/projected/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-kube-api-access-mmsll\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.849690 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-utilities\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.850004 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-catalog-content\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.850239 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-utilities\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:49 crc kubenswrapper[4787]: I0219 20:06:49.871987 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmsll\" (UniqueName: \"kubernetes.io/projected/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-kube-api-access-mmsll\") pod \"redhat-marketplace-5wqwr\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:50 crc kubenswrapper[4787]: I0219 20:06:50.007635 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:06:50 crc kubenswrapper[4787]: I0219 20:06:50.519665 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wqwr"] Feb 19 20:06:51 crc kubenswrapper[4787]: I0219 20:06:51.254049 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerID="3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35" exitCode=0 Feb 19 20:06:51 crc kubenswrapper[4787]: I0219 20:06:51.254154 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerDied","Data":"3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35"} Feb 19 20:06:51 crc kubenswrapper[4787]: I0219 20:06:51.254349 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerStarted","Data":"99cee8b8bbd2ad5ca99fa447eef60647f4979cdb4a08f8c3cb69c5a4b46fbaa2"} Feb 19 20:06:52 crc kubenswrapper[4787]: I0219 20:06:52.266704 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerStarted","Data":"a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8"} Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.277557 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerID="a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8" exitCode=0 Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.277643 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" 
event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerDied","Data":"a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8"} Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.466235 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b5q5n"] Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.468799 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.479382 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5q5n"] Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.643972 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkphd\" (UniqueName: \"kubernetes.io/projected/e43ce3ff-ef19-4c64-807c-cc316751b9f7-kube-api-access-fkphd\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.644040 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-catalog-content\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.644266 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-utilities\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.746617 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-utilities\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.746819 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkphd\" (UniqueName: \"kubernetes.io/projected/e43ce3ff-ef19-4c64-807c-cc316751b9f7-kube-api-access-fkphd\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.746868 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-catalog-content\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.747084 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-utilities\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.747341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-catalog-content\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.767475 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkphd\" 
(UniqueName: \"kubernetes.io/projected/e43ce3ff-ef19-4c64-807c-cc316751b9f7-kube-api-access-fkphd\") pod \"redhat-operators-b5q5n\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:53 crc kubenswrapper[4787]: I0219 20:06:53.812262 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:06:54 crc kubenswrapper[4787]: I0219 20:06:54.292097 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerStarted","Data":"fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d"} Feb 19 20:06:54 crc kubenswrapper[4787]: I0219 20:06:54.318180 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wqwr" podStartSLOduration=2.910711043 podStartE2EDuration="5.318158405s" podCreationTimestamp="2026-02-19 20:06:49 +0000 UTC" firstStartedPulling="2026-02-19 20:06:51.258581258 +0000 UTC m=+2879.049247200" lastFinishedPulling="2026-02-19 20:06:53.66602862 +0000 UTC m=+2881.456694562" observedRunningTime="2026-02-19 20:06:54.307227264 +0000 UTC m=+2882.097893206" watchObservedRunningTime="2026-02-19 20:06:54.318158405 +0000 UTC m=+2882.108824347" Feb 19 20:06:54 crc kubenswrapper[4787]: I0219 20:06:54.358517 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b5q5n"] Feb 19 20:06:55 crc kubenswrapper[4787]: I0219 20:06:55.301660 4787 generic.go:334] "Generic (PLEG): container finished" podID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerID="88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c" exitCode=0 Feb 19 20:06:55 crc kubenswrapper[4787]: I0219 20:06:55.303591 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5q5n" 
event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerDied","Data":"88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c"} Feb 19 20:06:55 crc kubenswrapper[4787]: I0219 20:06:55.303636 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5q5n" event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerStarted","Data":"919faff34d734d497649b3907eed56fe26829f94ea0575d8b715d7a59e9208b4"} Feb 19 20:06:56 crc kubenswrapper[4787]: I0219 20:06:56.313807 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5q5n" event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerStarted","Data":"860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461"} Feb 19 20:07:00 crc kubenswrapper[4787]: I0219 20:07:00.008162 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:07:00 crc kubenswrapper[4787]: I0219 20:07:00.008779 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:07:00 crc kubenswrapper[4787]: I0219 20:07:00.069248 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:07:00 crc kubenswrapper[4787]: I0219 20:07:00.402351 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:07:01 crc kubenswrapper[4787]: I0219 20:07:01.274528 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wqwr"] Feb 19 20:07:01 crc kubenswrapper[4787]: I0219 20:07:01.365192 4787 generic.go:334] "Generic (PLEG): container finished" podID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerID="860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461" exitCode=0 Feb 19 20:07:01 crc 
kubenswrapper[4787]: I0219 20:07:01.365285 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5q5n" event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerDied","Data":"860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461"} Feb 19 20:07:02 crc kubenswrapper[4787]: I0219 20:07:02.382806 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wqwr" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="registry-server" containerID="cri-o://fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d" gracePeriod=2 Feb 19 20:07:02 crc kubenswrapper[4787]: I0219 20:07:02.383297 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5q5n" event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerStarted","Data":"51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126"} Feb 19 20:07:02 crc kubenswrapper[4787]: I0219 20:07:02.415221 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b5q5n" podStartSLOduration=2.954824848 podStartE2EDuration="9.415199496s" podCreationTimestamp="2026-02-19 20:06:53 +0000 UTC" firstStartedPulling="2026-02-19 20:06:55.304734711 +0000 UTC m=+2883.095400653" lastFinishedPulling="2026-02-19 20:07:01.765109359 +0000 UTC m=+2889.555775301" observedRunningTime="2026-02-19 20:07:02.413455847 +0000 UTC m=+2890.204121789" watchObservedRunningTime="2026-02-19 20:07:02.415199496 +0000 UTC m=+2890.205865438" Feb 19 20:07:02 crc kubenswrapper[4787]: I0219 20:07:02.964569 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.104083 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-catalog-content\") pod \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.104596 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmsll\" (UniqueName: \"kubernetes.io/projected/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-kube-api-access-mmsll\") pod \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.104737 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-utilities\") pod \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\" (UID: \"9ef57afe-79dc-4185-8fd0-59a7ba1f9551\") " Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.105559 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-utilities" (OuterVolumeSpecName: "utilities") pod "9ef57afe-79dc-4185-8fd0-59a7ba1f9551" (UID: "9ef57afe-79dc-4185-8fd0-59a7ba1f9551"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.105952 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.114106 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-kube-api-access-mmsll" (OuterVolumeSpecName: "kube-api-access-mmsll") pod "9ef57afe-79dc-4185-8fd0-59a7ba1f9551" (UID: "9ef57afe-79dc-4185-8fd0-59a7ba1f9551"). InnerVolumeSpecName "kube-api-access-mmsll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.137428 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ef57afe-79dc-4185-8fd0-59a7ba1f9551" (UID: "9ef57afe-79dc-4185-8fd0-59a7ba1f9551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.210084 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.210315 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmsll\" (UniqueName: \"kubernetes.io/projected/9ef57afe-79dc-4185-8fd0-59a7ba1f9551-kube-api-access-mmsll\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.393602 4787 generic.go:334] "Generic (PLEG): container finished" podID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerID="fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d" exitCode=0 Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.393691 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerDied","Data":"fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d"} Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.393728 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wqwr" event={"ID":"9ef57afe-79dc-4185-8fd0-59a7ba1f9551","Type":"ContainerDied","Data":"99cee8b8bbd2ad5ca99fa447eef60647f4979cdb4a08f8c3cb69c5a4b46fbaa2"} Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.393750 4787 scope.go:117] "RemoveContainer" containerID="fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.393906 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wqwr" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.440677 4787 scope.go:117] "RemoveContainer" containerID="a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.449717 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wqwr"] Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.468550 4787 scope.go:117] "RemoveContainer" containerID="3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.487764 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wqwr"] Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.543382 4787 scope.go:117] "RemoveContainer" containerID="fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d" Feb 19 20:07:03 crc kubenswrapper[4787]: E0219 20:07:03.543862 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d\": container with ID starting with fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d not found: ID does not exist" containerID="fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.543899 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d"} err="failed to get container status \"fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d\": rpc error: code = NotFound desc = could not find container \"fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d\": container with ID starting with fe4ae32d0d111fccf6862041e508857f412c0dd9cffd9b95606400d37e548d3d not found: 
ID does not exist" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.543922 4787 scope.go:117] "RemoveContainer" containerID="a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8" Feb 19 20:07:03 crc kubenswrapper[4787]: E0219 20:07:03.544608 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8\": container with ID starting with a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8 not found: ID does not exist" containerID="a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.544664 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8"} err="failed to get container status \"a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8\": rpc error: code = NotFound desc = could not find container \"a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8\": container with ID starting with a16e1afb18f941859a31ba565e6e837d3a93ee581d3a3428bdbbac6d5ef458b8 not found: ID does not exist" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.544687 4787 scope.go:117] "RemoveContainer" containerID="3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35" Feb 19 20:07:03 crc kubenswrapper[4787]: E0219 20:07:03.545216 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35\": container with ID starting with 3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35 not found: ID does not exist" containerID="3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.545243 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35"} err="failed to get container status \"3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35\": rpc error: code = NotFound desc = could not find container \"3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35\": container with ID starting with 3f672f5b2b2c1820ff908211459615e9fc6509f97d57c9082a57e3cfc3587d35 not found: ID does not exist" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.812912 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:07:03 crc kubenswrapper[4787]: I0219 20:07:03.812957 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:07:04 crc kubenswrapper[4787]: I0219 20:07:04.863532 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b5q5n" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="registry-server" probeResult="failure" output=< Feb 19 20:07:04 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:07:04 crc kubenswrapper[4787]: > Feb 19 20:07:04 crc kubenswrapper[4787]: I0219 20:07:04.906439 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" path="/var/lib/kubelet/pods/9ef57afe-79dc-4185-8fd0-59a7ba1f9551/volumes" Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.263531 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.264147 4787 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.264193 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.265053 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.265111 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" gracePeriod=600 Feb 19 20:07:09 crc kubenswrapper[4787]: E0219 20:07:09.385800 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.457383 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" 
containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" exitCode=0 Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.457442 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80"} Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.457555 4787 scope.go:117] "RemoveContainer" containerID="72a8a65f261025714c105ad6cfbe05c8731adf0dbe3eb30d70b836574416bb23" Feb 19 20:07:09 crc kubenswrapper[4787]: I0219 20:07:09.458462 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:07:09 crc kubenswrapper[4787]: E0219 20:07:09.459007 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:07:14 crc kubenswrapper[4787]: I0219 20:07:14.859831 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b5q5n" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="registry-server" probeResult="failure" output=< Feb 19 20:07:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:07:14 crc kubenswrapper[4787]: > Feb 19 20:07:20 crc kubenswrapper[4787]: I0219 20:07:20.892869 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:07:20 crc kubenswrapper[4787]: E0219 20:07:20.893886 4787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:07:23 crc kubenswrapper[4787]: I0219 20:07:23.868249 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:07:23 crc kubenswrapper[4787]: I0219 20:07:23.927882 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:07:24 crc kubenswrapper[4787]: I0219 20:07:24.669530 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5q5n"] Feb 19 20:07:25 crc kubenswrapper[4787]: I0219 20:07:25.647534 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b5q5n" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="registry-server" containerID="cri-o://51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126" gracePeriod=2 Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.152843 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.200225 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-utilities\") pod \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.200359 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkphd\" (UniqueName: \"kubernetes.io/projected/e43ce3ff-ef19-4c64-807c-cc316751b9f7-kube-api-access-fkphd\") pod \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.200480 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-catalog-content\") pod \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\" (UID: \"e43ce3ff-ef19-4c64-807c-cc316751b9f7\") " Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.201264 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-utilities" (OuterVolumeSpecName: "utilities") pod "e43ce3ff-ef19-4c64-807c-cc316751b9f7" (UID: "e43ce3ff-ef19-4c64-807c-cc316751b9f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.207994 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43ce3ff-ef19-4c64-807c-cc316751b9f7-kube-api-access-fkphd" (OuterVolumeSpecName: "kube-api-access-fkphd") pod "e43ce3ff-ef19-4c64-807c-cc316751b9f7" (UID: "e43ce3ff-ef19-4c64-807c-cc316751b9f7"). InnerVolumeSpecName "kube-api-access-fkphd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.303053 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.303089 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkphd\" (UniqueName: \"kubernetes.io/projected/e43ce3ff-ef19-4c64-807c-cc316751b9f7-kube-api-access-fkphd\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.338566 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e43ce3ff-ef19-4c64-807c-cc316751b9f7" (UID: "e43ce3ff-ef19-4c64-807c-cc316751b9f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.405902 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43ce3ff-ef19-4c64-807c-cc316751b9f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.657975 4787 generic.go:334] "Generic (PLEG): container finished" podID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerID="51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126" exitCode=0 Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.658019 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b5q5n" event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerDied","Data":"51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126"} Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.658050 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-b5q5n" event={"ID":"e43ce3ff-ef19-4c64-807c-cc316751b9f7","Type":"ContainerDied","Data":"919faff34d734d497649b3907eed56fe26829f94ea0575d8b715d7a59e9208b4"} Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.658048 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b5q5n" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.658093 4787 scope.go:117] "RemoveContainer" containerID="51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.683982 4787 scope.go:117] "RemoveContainer" containerID="860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.700947 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b5q5n"] Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.717755 4787 scope.go:117] "RemoveContainer" containerID="88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.721719 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b5q5n"] Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.768712 4787 scope.go:117] "RemoveContainer" containerID="51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126" Feb 19 20:07:26 crc kubenswrapper[4787]: E0219 20:07:26.769142 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126\": container with ID starting with 51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126 not found: ID does not exist" containerID="51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.769169 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126"} err="failed to get container status \"51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126\": rpc error: code = NotFound desc = could not find container \"51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126\": container with ID starting with 51ea6347ca31a3a3ce99464b4e9ac068cd65745229f7c498d41605c21ef92126 not found: ID does not exist" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.769189 4787 scope.go:117] "RemoveContainer" containerID="860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461" Feb 19 20:07:26 crc kubenswrapper[4787]: E0219 20:07:26.770199 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461\": container with ID starting with 860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461 not found: ID does not exist" containerID="860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.770226 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461"} err="failed to get container status \"860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461\": rpc error: code = NotFound desc = could not find container \"860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461\": container with ID starting with 860ed3c14d43be64a66e8cd3bfb0f8eb5ac4c1181ec2e4b4f3390eb27eb4e461 not found: ID does not exist" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.770245 4787 scope.go:117] "RemoveContainer" containerID="88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c" Feb 19 20:07:26 crc kubenswrapper[4787]: E0219 
20:07:26.770525 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c\": container with ID starting with 88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c not found: ID does not exist" containerID="88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.770549 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c"} err="failed to get container status \"88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c\": rpc error: code = NotFound desc = could not find container \"88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c\": container with ID starting with 88c1c3daba17a1a589302d44baddfc775a79fbf30688742e588cf1ee74004b9c not found: ID does not exist" Feb 19 20:07:26 crc kubenswrapper[4787]: I0219 20:07:26.904702 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" path="/var/lib/kubelet/pods/e43ce3ff-ef19-4c64-807c-cc316751b9f7/volumes" Feb 19 20:07:34 crc kubenswrapper[4787]: I0219 20:07:34.892780 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:07:34 crc kubenswrapper[4787]: E0219 20:07:34.893623 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:07:49 crc kubenswrapper[4787]: I0219 20:07:49.891730 
4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:07:49 crc kubenswrapper[4787]: E0219 20:07:49.892645 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:08:03 crc kubenswrapper[4787]: I0219 20:08:03.892253 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:08:03 crc kubenswrapper[4787]: E0219 20:08:03.893078 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:08:18 crc kubenswrapper[4787]: I0219 20:08:18.892579 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:08:18 crc kubenswrapper[4787]: E0219 20:08:18.893268 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:08:31 crc kubenswrapper[4787]: I0219 
20:08:31.370431 4787 generic.go:334] "Generic (PLEG): container finished" podID="c8b17184-10d3-4bed-a849-e9b38351d827" containerID="23904f8fe3d35e75058a61610b97455d6c682d4d3210b0ce541ae787ee2e2415" exitCode=0 Feb 19 20:08:31 crc kubenswrapper[4787]: I0219 20:08:31.370490 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" event={"ID":"c8b17184-10d3-4bed-a849-e9b38351d827","Type":"ContainerDied","Data":"23904f8fe3d35e75058a61610b97455d6c682d4d3210b0ce541ae787ee2e2415"} Feb 19 20:08:32 crc kubenswrapper[4787]: I0219 20:08:32.929070 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036040 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkvmv\" (UniqueName: \"kubernetes.io/projected/c8b17184-10d3-4bed-a849-e9b38351d827-kube-api-access-dkvmv\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036093 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-2\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036173 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-inventory\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036304 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-telemetry-power-monitoring-combined-ca-bundle\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036450 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-1\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036508 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ssh-key-openstack-edpm-ipam\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.036528 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-0\") pod \"c8b17184-10d3-4bed-a849-e9b38351d827\" (UID: \"c8b17184-10d3-4bed-a849-e9b38351d827\") " Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.042053 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.042077 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b17184-10d3-4bed-a849-e9b38351d827-kube-api-access-dkvmv" (OuterVolumeSpecName: "kube-api-access-dkvmv") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "kube-api-access-dkvmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.066637 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.066986 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.068996 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-inventory" (OuterVolumeSpecName: "inventory") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.071404 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.072388 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "c8b17184-10d3-4bed-a849-e9b38351d827" (UID: "c8b17184-10d3-4bed-a849-e9b38351d827"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.139761 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.140097 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.140108 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.140118 4787 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-dkvmv\" (UniqueName: \"kubernetes.io/projected/c8b17184-10d3-4bed-a849-e9b38351d827-kube-api-access-dkvmv\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.140129 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.140143 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.140158 4787 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b17184-10d3-4bed-a849-e9b38351d827-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.392301 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" event={"ID":"c8b17184-10d3-4bed-a849-e9b38351d827","Type":"ContainerDied","Data":"ee35987e2a9ed144a2ba7de6589e4a63aed5ff6572fd618dfa0234e4b138a3fd"} Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.392348 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee35987e2a9ed144a2ba7de6589e4a63aed5ff6572fd618dfa0234e4b138a3fd" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.392358 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.494871 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw"] Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495353 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="extract-content" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495371 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="extract-content" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495402 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="extract-utilities" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495410 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="extract-utilities" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495425 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="extract-content" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495431 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="extract-content" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495440 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="registry-server" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495445 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="registry-server" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495456 4787 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="extract-utilities" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495473 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="extract-utilities" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495489 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="registry-server" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495496 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="registry-server" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.495504 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b17184-10d3-4bed-a849-e9b38351d827" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495511 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b17184-10d3-4bed-a849-e9b38351d827" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495719 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef57afe-79dc-4185-8fd0-59a7ba1f9551" containerName="registry-server" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495730 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43ce3ff-ef19-4c64-807c-cc316751b9f7" containerName="registry-server" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.495752 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b17184-10d3-4bed-a849-e9b38351d827" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.496742 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.499311 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.502894 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.503195 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.503367 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.503504 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r2m7s" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.508152 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw"] Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.548804 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.549136 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-ssh-key-openstack-edpm-ipam\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.549284 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.549394 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-427x5\" (UniqueName: \"kubernetes.io/projected/0e791926-1bff-4cce-9d66-994a91623a18-kube-api-access-427x5\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.549461 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.652085 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.652543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.652770 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.652965 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-427x5\" (UniqueName: \"kubernetes.io/projected/0e791926-1bff-4cce-9d66-994a91623a18-kube-api-access-427x5\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.653163 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.657948 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.658273 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.658727 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.663152 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.670726 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-427x5\" (UniqueName: \"kubernetes.io/projected/0e791926-1bff-4cce-9d66-994a91623a18-kube-api-access-427x5\") pod \"logging-edpm-deployment-openstack-edpm-ipam-99zzw\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.823739 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:33 crc kubenswrapper[4787]: I0219 20:08:33.892570 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:08:33 crc kubenswrapper[4787]: E0219 20:08:33.893116 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:08:34 crc kubenswrapper[4787]: I0219 20:08:34.362995 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw"] Feb 19 20:08:34 crc kubenswrapper[4787]: I0219 20:08:34.366016 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:08:34 crc kubenswrapper[4787]: I0219 20:08:34.408592 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" event={"ID":"0e791926-1bff-4cce-9d66-994a91623a18","Type":"ContainerStarted","Data":"1f93d30e7cb10dcd8862333e0b21f2656ee0ce427127e089e95f4e0d0503da40"} Feb 19 20:08:35 crc kubenswrapper[4787]: I0219 20:08:35.422491 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" event={"ID":"0e791926-1bff-4cce-9d66-994a91623a18","Type":"ContainerStarted","Data":"8d36c88de1a375c5b5972fefb764b571773dc82e1276ea73cf7d613c0b8ff75c"} Feb 19 20:08:35 crc 
kubenswrapper[4787]: I0219 20:08:35.448065 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" podStartSLOduration=1.921218347 podStartE2EDuration="2.448045999s" podCreationTimestamp="2026-02-19 20:08:33 +0000 UTC" firstStartedPulling="2026-02-19 20:08:34.365820513 +0000 UTC m=+2982.156486455" lastFinishedPulling="2026-02-19 20:08:34.892648165 +0000 UTC m=+2982.683314107" observedRunningTime="2026-02-19 20:08:35.442597214 +0000 UTC m=+2983.233263156" watchObservedRunningTime="2026-02-19 20:08:35.448045999 +0000 UTC m=+2983.238711941" Feb 19 20:08:47 crc kubenswrapper[4787]: I0219 20:08:47.893186 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:08:47 crc kubenswrapper[4787]: E0219 20:08:47.894379 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:08:49 crc kubenswrapper[4787]: I0219 20:08:49.592775 4787 generic.go:334] "Generic (PLEG): container finished" podID="0e791926-1bff-4cce-9d66-994a91623a18" containerID="8d36c88de1a375c5b5972fefb764b571773dc82e1276ea73cf7d613c0b8ff75c" exitCode=0 Feb 19 20:08:49 crc kubenswrapper[4787]: I0219 20:08:49.592847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" event={"ID":"0e791926-1bff-4cce-9d66-994a91623a18","Type":"ContainerDied","Data":"8d36c88de1a375c5b5972fefb764b571773dc82e1276ea73cf7d613c0b8ff75c"} Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.066754 4787 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.203415 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-1\") pod \"0e791926-1bff-4cce-9d66-994a91623a18\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.203638 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-0\") pod \"0e791926-1bff-4cce-9d66-994a91623a18\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.203706 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-427x5\" (UniqueName: \"kubernetes.io/projected/0e791926-1bff-4cce-9d66-994a91623a18-kube-api-access-427x5\") pod \"0e791926-1bff-4cce-9d66-994a91623a18\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.203753 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-ssh-key-openstack-edpm-ipam\") pod \"0e791926-1bff-4cce-9d66-994a91623a18\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.203911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-inventory\") pod \"0e791926-1bff-4cce-9d66-994a91623a18\" (UID: \"0e791926-1bff-4cce-9d66-994a91623a18\") " Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 
20:08:51.208947 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e791926-1bff-4cce-9d66-994a91623a18-kube-api-access-427x5" (OuterVolumeSpecName: "kube-api-access-427x5") pod "0e791926-1bff-4cce-9d66-994a91623a18" (UID: "0e791926-1bff-4cce-9d66-994a91623a18"). InnerVolumeSpecName "kube-api-access-427x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.235168 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "0e791926-1bff-4cce-9d66-994a91623a18" (UID: "0e791926-1bff-4cce-9d66-994a91623a18"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.235494 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "0e791926-1bff-4cce-9d66-994a91623a18" (UID: "0e791926-1bff-4cce-9d66-994a91623a18"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.235532 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-inventory" (OuterVolumeSpecName: "inventory") pod "0e791926-1bff-4cce-9d66-994a91623a18" (UID: "0e791926-1bff-4cce-9d66-994a91623a18"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.239726 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e791926-1bff-4cce-9d66-994a91623a18" (UID: "0e791926-1bff-4cce-9d66-994a91623a18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.306923 4787 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.306957 4787 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.306968 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-427x5\" (UniqueName: \"kubernetes.io/projected/0e791926-1bff-4cce-9d66-994a91623a18-kube-api-access-427x5\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.306981 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.306991 4787 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e791926-1bff-4cce-9d66-994a91623a18-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 
20:08:51.621843 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" event={"ID":"0e791926-1bff-4cce-9d66-994a91623a18","Type":"ContainerDied","Data":"1f93d30e7cb10dcd8862333e0b21f2656ee0ce427127e089e95f4e0d0503da40"} Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.622212 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f93d30e7cb10dcd8862333e0b21f2656ee0ce427127e089e95f4e0d0503da40" Feb 19 20:08:51 crc kubenswrapper[4787]: I0219 20:08:51.622292 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-99zzw" Feb 19 20:09:00 crc kubenswrapper[4787]: I0219 20:09:00.892244 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:09:00 crc kubenswrapper[4787]: E0219 20:09:00.893042 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:09:11 crc kubenswrapper[4787]: I0219 20:09:11.892179 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:09:11 crc kubenswrapper[4787]: E0219 20:09:11.893123 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.050504 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7s5vl"] Feb 19 20:09:21 crc kubenswrapper[4787]: E0219 20:09:21.051822 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e791926-1bff-4cce-9d66-994a91623a18" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.051847 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e791926-1bff-4cce-9d66-994a91623a18" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.052185 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e791926-1bff-4cce-9d66-994a91623a18" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.054565 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.068101 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7s5vl"] Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.089536 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-utilities\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.089672 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw64q\" (UniqueName: \"kubernetes.io/projected/ffcc863a-1983-4e29-8645-fb7faafb63a0-kube-api-access-dw64q\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.089802 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-catalog-content\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.192600 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw64q\" (UniqueName: \"kubernetes.io/projected/ffcc863a-1983-4e29-8645-fb7faafb63a0-kube-api-access-dw64q\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.192876 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-catalog-content\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.193406 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-catalog-content\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.193590 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-utilities\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.193914 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-utilities\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.213694 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw64q\" (UniqueName: \"kubernetes.io/projected/ffcc863a-1983-4e29-8645-fb7faafb63a0-kube-api-access-dw64q\") pod \"community-operators-7s5vl\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.384846 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.958711 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7s5vl"] Feb 19 20:09:21 crc kubenswrapper[4787]: I0219 20:09:21.982892 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerStarted","Data":"ffcb82ee165ce01f769b3de28c7b63197785773f75ddd5861cf131dee8e73141"} Feb 19 20:09:22 crc kubenswrapper[4787]: I0219 20:09:22.995559 4787 generic.go:334] "Generic (PLEG): container finished" podID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerID="6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122" exitCode=0 Feb 19 20:09:22 crc kubenswrapper[4787]: I0219 20:09:22.995641 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerDied","Data":"6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122"} Feb 19 20:09:24 crc kubenswrapper[4787]: I0219 20:09:24.005927 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerStarted","Data":"eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93"} Feb 19 20:09:25 crc kubenswrapper[4787]: I0219 20:09:25.892555 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:09:25 crc kubenswrapper[4787]: E0219 20:09:25.893472 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:09:26 crc kubenswrapper[4787]: I0219 20:09:26.037046 4787 generic.go:334] "Generic (PLEG): container finished" podID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerID="eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93" exitCode=0 Feb 19 20:09:26 crc kubenswrapper[4787]: I0219 20:09:26.037123 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerDied","Data":"eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93"} Feb 19 20:09:28 crc kubenswrapper[4787]: I0219 20:09:28.059813 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerStarted","Data":"ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5"} Feb 19 20:09:28 crc kubenswrapper[4787]: I0219 20:09:28.100104 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7s5vl" podStartSLOduration=2.676549858 podStartE2EDuration="7.100074402s" podCreationTimestamp="2026-02-19 20:09:21 +0000 UTC" firstStartedPulling="2026-02-19 20:09:22.99752936 +0000 UTC m=+3030.788195302" lastFinishedPulling="2026-02-19 20:09:27.421053904 +0000 UTC m=+3035.211719846" observedRunningTime="2026-02-19 20:09:28.091155339 +0000 UTC m=+3035.881821291" watchObservedRunningTime="2026-02-19 20:09:28.100074402 +0000 UTC m=+3035.890740344" Feb 19 20:09:31 crc kubenswrapper[4787]: I0219 20:09:31.385929 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:31 crc kubenswrapper[4787]: I0219 
20:09:31.386309 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:31 crc kubenswrapper[4787]: I0219 20:09:31.431594 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:32 crc kubenswrapper[4787]: I0219 20:09:32.138966 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:32 crc kubenswrapper[4787]: I0219 20:09:32.188628 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7s5vl"] Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.111917 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7s5vl" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="registry-server" containerID="cri-o://ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5" gracePeriod=2 Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.629927 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.734702 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-utilities\") pod \"ffcc863a-1983-4e29-8645-fb7faafb63a0\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.735074 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-catalog-content\") pod \"ffcc863a-1983-4e29-8645-fb7faafb63a0\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.735893 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw64q\" (UniqueName: \"kubernetes.io/projected/ffcc863a-1983-4e29-8645-fb7faafb63a0-kube-api-access-dw64q\") pod \"ffcc863a-1983-4e29-8645-fb7faafb63a0\" (UID: \"ffcc863a-1983-4e29-8645-fb7faafb63a0\") " Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.735945 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-utilities" (OuterVolumeSpecName: "utilities") pod "ffcc863a-1983-4e29-8645-fb7faafb63a0" (UID: "ffcc863a-1983-4e29-8645-fb7faafb63a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.737335 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.745968 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcc863a-1983-4e29-8645-fb7faafb63a0-kube-api-access-dw64q" (OuterVolumeSpecName: "kube-api-access-dw64q") pod "ffcc863a-1983-4e29-8645-fb7faafb63a0" (UID: "ffcc863a-1983-4e29-8645-fb7faafb63a0"). InnerVolumeSpecName "kube-api-access-dw64q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:09:34 crc kubenswrapper[4787]: I0219 20:09:34.839245 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw64q\" (UniqueName: \"kubernetes.io/projected/ffcc863a-1983-4e29-8645-fb7faafb63a0-kube-api-access-dw64q\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.122502 4787 generic.go:334] "Generic (PLEG): container finished" podID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerID="ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5" exitCode=0 Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.122692 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s5vl" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.122680 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerDied","Data":"ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5"} Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.123774 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s5vl" event={"ID":"ffcc863a-1983-4e29-8645-fb7faafb63a0","Type":"ContainerDied","Data":"ffcb82ee165ce01f769b3de28c7b63197785773f75ddd5861cf131dee8e73141"} Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.123798 4787 scope.go:117] "RemoveContainer" containerID="ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.146199 4787 scope.go:117] "RemoveContainer" containerID="eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.169049 4787 scope.go:117] "RemoveContainer" containerID="6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.238548 4787 scope.go:117] "RemoveContainer" containerID="ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5" Feb 19 20:09:35 crc kubenswrapper[4787]: E0219 20:09:35.238924 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5\": container with ID starting with ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5 not found: ID does not exist" containerID="ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.238955 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5"} err="failed to get container status \"ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5\": rpc error: code = NotFound desc = could not find container \"ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5\": container with ID starting with ee1cd81d9d96368773b33ea8222929973675c830759a52726e42514f425190c5 not found: ID does not exist" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.238977 4787 scope.go:117] "RemoveContainer" containerID="eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93" Feb 19 20:09:35 crc kubenswrapper[4787]: E0219 20:09:35.239193 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93\": container with ID starting with eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93 not found: ID does not exist" containerID="eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.239217 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93"} err="failed to get container status \"eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93\": rpc error: code = NotFound desc = could not find container \"eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93\": container with ID starting with eb6b9e8e103f3ea8d29fd60a85a39b6781020923e2bbf6fc74ba69dba6e83e93 not found: ID does not exist" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.239231 4787 scope.go:117] "RemoveContainer" containerID="6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122" Feb 19 20:09:35 crc kubenswrapper[4787]: E0219 20:09:35.239441 4787 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122\": container with ID starting with 6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122 not found: ID does not exist" containerID="6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.239459 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122"} err="failed to get container status \"6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122\": rpc error: code = NotFound desc = could not find container \"6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122\": container with ID starting with 6728ff256fb8ca2ecdac9b1ee6fb25ddf470e8f3155a232564f56f7f2323a122 not found: ID does not exist" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.602911 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffcc863a-1983-4e29-8645-fb7faafb63a0" (UID: "ffcc863a-1983-4e29-8645-fb7faafb63a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.657527 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffcc863a-1983-4e29-8645-fb7faafb63a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.774945 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7s5vl"] Feb 19 20:09:35 crc kubenswrapper[4787]: I0219 20:09:35.785108 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7s5vl"] Feb 19 20:09:36 crc kubenswrapper[4787]: I0219 20:09:36.903813 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" path="/var/lib/kubelet/pods/ffcc863a-1983-4e29-8645-fb7faafb63a0/volumes" Feb 19 20:09:38 crc kubenswrapper[4787]: I0219 20:09:38.892625 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:09:38 crc kubenswrapper[4787]: E0219 20:09:38.893181 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:09:49 crc kubenswrapper[4787]: I0219 20:09:49.892030 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:09:49 crc kubenswrapper[4787]: E0219 20:09:49.894701 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:10:03 crc kubenswrapper[4787]: I0219 20:10:03.893168 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:10:03 crc kubenswrapper[4787]: E0219 20:10:03.894003 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:10:15 crc kubenswrapper[4787]: I0219 20:10:15.891482 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:10:15 crc kubenswrapper[4787]: E0219 20:10:15.892294 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:10:27 crc kubenswrapper[4787]: I0219 20:10:27.892644 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:10:27 crc kubenswrapper[4787]: E0219 20:10:27.893980 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:10:42 crc kubenswrapper[4787]: I0219 20:10:42.900916 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:10:42 crc kubenswrapper[4787]: E0219 20:10:42.901883 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:10:53 crc kubenswrapper[4787]: I0219 20:10:53.892647 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:10:53 crc kubenswrapper[4787]: E0219 20:10:53.893723 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:11:07 crc kubenswrapper[4787]: I0219 20:11:07.892697 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:11:07 crc kubenswrapper[4787]: E0219 20:11:07.893510 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:11:21 crc kubenswrapper[4787]: I0219 20:11:21.893426 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:11:21 crc kubenswrapper[4787]: E0219 20:11:21.894781 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:11:33 crc kubenswrapper[4787]: I0219 20:11:33.892668 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:11:33 crc kubenswrapper[4787]: E0219 20:11:33.893536 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:11:44 crc kubenswrapper[4787]: I0219 20:11:44.892356 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:11:44 crc kubenswrapper[4787]: E0219 20:11:44.893024 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:11:58 crc kubenswrapper[4787]: I0219 20:11:58.892426 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:11:58 crc kubenswrapper[4787]: E0219 20:11:58.894347 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:12:12 crc kubenswrapper[4787]: I0219 20:12:12.900246 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:12:13 crc kubenswrapper[4787]: I0219 20:12:13.903687 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"3bfbf101fd8e0f051d1d7411a2079b0ccb73f858ae4dad6b4039b35f4381ec81"} Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.178542 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8gqg2"] Feb 19 20:14:03 crc kubenswrapper[4787]: E0219 20:14:03.179557 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="extract-content" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.179571 4787 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="extract-content" Feb 19 20:14:03 crc kubenswrapper[4787]: E0219 20:14:03.179600 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="extract-utilities" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.179626 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="extract-utilities" Feb 19 20:14:03 crc kubenswrapper[4787]: E0219 20:14:03.179641 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="registry-server" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.179647 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="registry-server" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.181121 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcc863a-1983-4e29-8645-fb7faafb63a0" containerName="registry-server" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.186127 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.199225 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gqg2"] Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.257547 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbz6c\" (UniqueName: \"kubernetes.io/projected/26ac488b-b330-4267-833b-94769fdecfad-kube-api-access-jbz6c\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.257673 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-catalog-content\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.257725 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-utilities\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.359769 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbz6c\" (UniqueName: \"kubernetes.io/projected/26ac488b-b330-4267-833b-94769fdecfad-kube-api-access-jbz6c\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.359883 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-catalog-content\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.359937 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-utilities\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.360344 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-catalog-content\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.360539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-utilities\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.385530 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbz6c\" (UniqueName: \"kubernetes.io/projected/26ac488b-b330-4267-833b-94769fdecfad-kube-api-access-jbz6c\") pod \"certified-operators-8gqg2\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:03 crc kubenswrapper[4787]: I0219 20:14:03.516232 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:04 crc kubenswrapper[4787]: I0219 20:14:04.153378 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gqg2"] Feb 19 20:14:05 crc kubenswrapper[4787]: I0219 20:14:05.105907 4787 generic.go:334] "Generic (PLEG): container finished" podID="26ac488b-b330-4267-833b-94769fdecfad" containerID="b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd" exitCode=0 Feb 19 20:14:05 crc kubenswrapper[4787]: I0219 20:14:05.105989 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerDied","Data":"b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd"} Feb 19 20:14:05 crc kubenswrapper[4787]: I0219 20:14:05.106560 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerStarted","Data":"53bf5e7b89f54dfe01f44b7c1731476b20b17aafd579b0cad578aad6a9aa55a0"} Feb 19 20:14:05 crc kubenswrapper[4787]: I0219 20:14:05.108041 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:14:07 crc kubenswrapper[4787]: I0219 20:14:07.128392 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerStarted","Data":"751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c"} Feb 19 20:14:14 crc kubenswrapper[4787]: I0219 20:14:14.209813 4787 generic.go:334] "Generic (PLEG): container finished" podID="26ac488b-b330-4267-833b-94769fdecfad" containerID="751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c" exitCode=0 Feb 19 20:14:14 crc kubenswrapper[4787]: I0219 20:14:14.209906 4787 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerDied","Data":"751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c"} Feb 19 20:14:15 crc kubenswrapper[4787]: I0219 20:14:15.223054 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerStarted","Data":"c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d"} Feb 19 20:14:15 crc kubenswrapper[4787]: I0219 20:14:15.247775 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8gqg2" podStartSLOduration=2.697487431 podStartE2EDuration="12.247749407s" podCreationTimestamp="2026-02-19 20:14:03 +0000 UTC" firstStartedPulling="2026-02-19 20:14:05.107799368 +0000 UTC m=+3312.898465310" lastFinishedPulling="2026-02-19 20:14:14.658061344 +0000 UTC m=+3322.448727286" observedRunningTime="2026-02-19 20:14:15.240924723 +0000 UTC m=+3323.031590675" watchObservedRunningTime="2026-02-19 20:14:15.247749407 +0000 UTC m=+3323.038415349" Feb 19 20:14:23 crc kubenswrapper[4787]: I0219 20:14:23.517277 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:23 crc kubenswrapper[4787]: I0219 20:14:23.518309 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:24 crc kubenswrapper[4787]: I0219 20:14:24.568443 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8gqg2" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="registry-server" probeResult="failure" output=< Feb 19 20:14:24 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:14:24 crc kubenswrapper[4787]: > Feb 19 20:14:33 crc 
kubenswrapper[4787]: I0219 20:14:33.594399 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:33 crc kubenswrapper[4787]: I0219 20:14:33.647453 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:34 crc kubenswrapper[4787]: I0219 20:14:34.391011 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8gqg2"] Feb 19 20:14:35 crc kubenswrapper[4787]: I0219 20:14:35.446738 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8gqg2" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="registry-server" containerID="cri-o://c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d" gracePeriod=2 Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.047560 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.155872 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-utilities\") pod \"26ac488b-b330-4267-833b-94769fdecfad\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.156448 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbz6c\" (UniqueName: \"kubernetes.io/projected/26ac488b-b330-4267-833b-94769fdecfad-kube-api-access-jbz6c\") pod \"26ac488b-b330-4267-833b-94769fdecfad\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.156622 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-catalog-content\") pod \"26ac488b-b330-4267-833b-94769fdecfad\" (UID: \"26ac488b-b330-4267-833b-94769fdecfad\") " Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.157033 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-utilities" (OuterVolumeSpecName: "utilities") pod "26ac488b-b330-4267-833b-94769fdecfad" (UID: "26ac488b-b330-4267-833b-94769fdecfad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.157631 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.164846 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ac488b-b330-4267-833b-94769fdecfad-kube-api-access-jbz6c" (OuterVolumeSpecName: "kube-api-access-jbz6c") pod "26ac488b-b330-4267-833b-94769fdecfad" (UID: "26ac488b-b330-4267-833b-94769fdecfad"). InnerVolumeSpecName "kube-api-access-jbz6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.206820 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26ac488b-b330-4267-833b-94769fdecfad" (UID: "26ac488b-b330-4267-833b-94769fdecfad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.260917 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26ac488b-b330-4267-833b-94769fdecfad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.260951 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbz6c\" (UniqueName: \"kubernetes.io/projected/26ac488b-b330-4267-833b-94769fdecfad-kube-api-access-jbz6c\") on node \"crc\" DevicePath \"\"" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.458596 4787 generic.go:334] "Generic (PLEG): container finished" podID="26ac488b-b330-4267-833b-94769fdecfad" containerID="c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d" exitCode=0 Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.458643 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerDied","Data":"c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d"} Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.458683 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gqg2" event={"ID":"26ac488b-b330-4267-833b-94769fdecfad","Type":"ContainerDied","Data":"53bf5e7b89f54dfe01f44b7c1731476b20b17aafd579b0cad578aad6a9aa55a0"} Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.458708 4787 scope.go:117] "RemoveContainer" containerID="c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.458796 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gqg2" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.493161 4787 scope.go:117] "RemoveContainer" containerID="751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.500287 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8gqg2"] Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.513733 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8gqg2"] Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.518617 4787 scope.go:117] "RemoveContainer" containerID="b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.578711 4787 scope.go:117] "RemoveContainer" containerID="c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d" Feb 19 20:14:36 crc kubenswrapper[4787]: E0219 20:14:36.579137 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d\": container with ID starting with c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d not found: ID does not exist" containerID="c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.579185 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d"} err="failed to get container status \"c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d\": rpc error: code = NotFound desc = could not find container \"c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d\": container with ID starting with c3ab51f5274c4d81932a138c811dc1bfdec035e9e45f8e61e210c514db96493d not 
found: ID does not exist" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.579205 4787 scope.go:117] "RemoveContainer" containerID="751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c" Feb 19 20:14:36 crc kubenswrapper[4787]: E0219 20:14:36.579424 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c\": container with ID starting with 751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c not found: ID does not exist" containerID="751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.579443 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c"} err="failed to get container status \"751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c\": rpc error: code = NotFound desc = could not find container \"751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c\": container with ID starting with 751efa3a5591118b5de4287e269ffbce93e9b8fab5818ffafdc112ccf7625c5c not found: ID does not exist" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.579458 4787 scope.go:117] "RemoveContainer" containerID="b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd" Feb 19 20:14:36 crc kubenswrapper[4787]: E0219 20:14:36.579752 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd\": container with ID starting with b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd not found: ID does not exist" containerID="b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.579776 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd"} err="failed to get container status \"b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd\": rpc error: code = NotFound desc = could not find container \"b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd\": container with ID starting with b963df3c84542a82cd501babc00f56c5395a856cfbc246f498283e5264b065fd not found: ID does not exist" Feb 19 20:14:36 crc kubenswrapper[4787]: I0219 20:14:36.904596 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ac488b-b330-4267-833b-94769fdecfad" path="/var/lib/kubelet/pods/26ac488b-b330-4267-833b-94769fdecfad/volumes" Feb 19 20:14:39 crc kubenswrapper[4787]: I0219 20:14:39.263490 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:14:39 crc kubenswrapper[4787]: I0219 20:14:39.263830 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.164762 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm"] Feb 19 20:15:00 crc kubenswrapper[4787]: E0219 20:15:00.166005 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="extract-utilities" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.166025 4787 
state_mem.go:107] "Deleted CPUSet assignment" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="extract-utilities" Feb 19 20:15:00 crc kubenswrapper[4787]: E0219 20:15:00.166045 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="registry-server" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.166052 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="registry-server" Feb 19 20:15:00 crc kubenswrapper[4787]: E0219 20:15:00.166098 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="extract-content" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.166105 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="extract-content" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.166452 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ac488b-b330-4267-833b-94769fdecfad" containerName="registry-server" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.167631 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.169944 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.170402 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.180023 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm"] Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.268858 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623075c5-5d08-4036-9e25-24b66d9c353a-secret-volume\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.269046 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k878d\" (UniqueName: \"kubernetes.io/projected/623075c5-5d08-4036-9e25-24b66d9c353a-kube-api-access-k878d\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.269174 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623075c5-5d08-4036-9e25-24b66d9c353a-config-volume\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.371686 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k878d\" (UniqueName: \"kubernetes.io/projected/623075c5-5d08-4036-9e25-24b66d9c353a-kube-api-access-k878d\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.372075 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623075c5-5d08-4036-9e25-24b66d9c353a-config-volume\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.372241 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623075c5-5d08-4036-9e25-24b66d9c353a-secret-volume\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.373045 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623075c5-5d08-4036-9e25-24b66d9c353a-config-volume\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.378770 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/623075c5-5d08-4036-9e25-24b66d9c353a-secret-volume\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.389583 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k878d\" (UniqueName: \"kubernetes.io/projected/623075c5-5d08-4036-9e25-24b66d9c353a-kube-api-access-k878d\") pod \"collect-profiles-29525535-7b6cm\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.489142 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:00 crc kubenswrapper[4787]: I0219 20:15:00.993012 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm"] Feb 19 20:15:01 crc kubenswrapper[4787]: I0219 20:15:01.742792 4787 generic.go:334] "Generic (PLEG): container finished" podID="623075c5-5d08-4036-9e25-24b66d9c353a" containerID="d496884bfe7f7b7c58685a00479ff90dd0e5aa8dfbc8abfa70c4dd3c498a22ce" exitCode=0 Feb 19 20:15:01 crc kubenswrapper[4787]: I0219 20:15:01.743104 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" event={"ID":"623075c5-5d08-4036-9e25-24b66d9c353a","Type":"ContainerDied","Data":"d496884bfe7f7b7c58685a00479ff90dd0e5aa8dfbc8abfa70c4dd3c498a22ce"} Feb 19 20:15:01 crc kubenswrapper[4787]: I0219 20:15:01.743133 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" 
event={"ID":"623075c5-5d08-4036-9e25-24b66d9c353a","Type":"ContainerStarted","Data":"9ee2b2b1cf7b71414cf66a69a5ffc0e5534290a78d92b1954d24957999f606e0"} Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.325855 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.454332 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623075c5-5d08-4036-9e25-24b66d9c353a-config-volume\") pod \"623075c5-5d08-4036-9e25-24b66d9c353a\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.454737 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623075c5-5d08-4036-9e25-24b66d9c353a-secret-volume\") pod \"623075c5-5d08-4036-9e25-24b66d9c353a\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.454921 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k878d\" (UniqueName: \"kubernetes.io/projected/623075c5-5d08-4036-9e25-24b66d9c353a-kube-api-access-k878d\") pod \"623075c5-5d08-4036-9e25-24b66d9c353a\" (UID: \"623075c5-5d08-4036-9e25-24b66d9c353a\") " Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.455362 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623075c5-5d08-4036-9e25-24b66d9c353a-config-volume" (OuterVolumeSpecName: "config-volume") pod "623075c5-5d08-4036-9e25-24b66d9c353a" (UID: "623075c5-5d08-4036-9e25-24b66d9c353a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.456513 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/623075c5-5d08-4036-9e25-24b66d9c353a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.460731 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/623075c5-5d08-4036-9e25-24b66d9c353a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "623075c5-5d08-4036-9e25-24b66d9c353a" (UID: "623075c5-5d08-4036-9e25-24b66d9c353a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.465982 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623075c5-5d08-4036-9e25-24b66d9c353a-kube-api-access-k878d" (OuterVolumeSpecName: "kube-api-access-k878d") pod "623075c5-5d08-4036-9e25-24b66d9c353a" (UID: "623075c5-5d08-4036-9e25-24b66d9c353a"). InnerVolumeSpecName "kube-api-access-k878d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.558442 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/623075c5-5d08-4036-9e25-24b66d9c353a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.558479 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k878d\" (UniqueName: \"kubernetes.io/projected/623075c5-5d08-4036-9e25-24b66d9c353a-kube-api-access-k878d\") on node \"crc\" DevicePath \"\"" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.764487 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" event={"ID":"623075c5-5d08-4036-9e25-24b66d9c353a","Type":"ContainerDied","Data":"9ee2b2b1cf7b71414cf66a69a5ffc0e5534290a78d92b1954d24957999f606e0"} Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.764530 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee2b2b1cf7b71414cf66a69a5ffc0e5534290a78d92b1954d24957999f606e0" Feb 19 20:15:03 crc kubenswrapper[4787]: I0219 20:15:03.764633 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm" Feb 19 20:15:04 crc kubenswrapper[4787]: I0219 20:15:04.407449 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd"] Feb 19 20:15:04 crc kubenswrapper[4787]: I0219 20:15:04.419478 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525490-hxpjd"] Feb 19 20:15:04 crc kubenswrapper[4787]: I0219 20:15:04.922361 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62f9ef9-50b4-40d7-a4d6-41046d4fdf95" path="/var/lib/kubelet/pods/a62f9ef9-50b4-40d7-a4d6-41046d4fdf95/volumes" Feb 19 20:15:09 crc kubenswrapper[4787]: I0219 20:15:09.263976 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:15:09 crc kubenswrapper[4787]: I0219 20:15:09.264734 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:15:09 crc kubenswrapper[4787]: I0219 20:15:09.269253 4787 scope.go:117] "RemoveContainer" containerID="462a87db35f071b2f556abca0715b4980d3467b9c8471be41769c92482f7eb91" Feb 19 20:15:39 crc kubenswrapper[4787]: I0219 20:15:39.263072 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 20:15:39 crc kubenswrapper[4787]: I0219 20:15:39.263661 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:15:39 crc kubenswrapper[4787]: I0219 20:15:39.263707 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:15:39 crc kubenswrapper[4787]: I0219 20:15:39.264416 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bfbf101fd8e0f051d1d7411a2079b0ccb73f858ae4dad6b4039b35f4381ec81"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:15:39 crc kubenswrapper[4787]: I0219 20:15:39.264549 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://3bfbf101fd8e0f051d1d7411a2079b0ccb73f858ae4dad6b4039b35f4381ec81" gracePeriod=600 Feb 19 20:15:40 crc kubenswrapper[4787]: I0219 20:15:40.137154 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="3bfbf101fd8e0f051d1d7411a2079b0ccb73f858ae4dad6b4039b35f4381ec81" exitCode=0 Feb 19 20:15:40 crc kubenswrapper[4787]: I0219 20:15:40.137739 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" 
event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"3bfbf101fd8e0f051d1d7411a2079b0ccb73f858ae4dad6b4039b35f4381ec81"} Feb 19 20:15:40 crc kubenswrapper[4787]: I0219 20:15:40.137794 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42"} Feb 19 20:15:40 crc kubenswrapper[4787]: I0219 20:15:40.137824 4787 scope.go:117] "RemoveContainer" containerID="9c2cb9cb585324a6f00cb9b08045a31a28008d0a3153a83222216e56f7b98d80" Feb 19 20:17:39 crc kubenswrapper[4787]: I0219 20:17:39.263709 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:17:39 crc kubenswrapper[4787]: I0219 20:17:39.264295 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:18:09 crc kubenswrapper[4787]: I0219 20:18:09.263213 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:18:09 crc kubenswrapper[4787]: I0219 20:18:09.263817 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.756944 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-np75l"] Feb 19 20:18:11 crc kubenswrapper[4787]: E0219 20:18:11.758049 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623075c5-5d08-4036-9e25-24b66d9c353a" containerName="collect-profiles" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.758062 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="623075c5-5d08-4036-9e25-24b66d9c353a" containerName="collect-profiles" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.758293 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="623075c5-5d08-4036-9e25-24b66d9c353a" containerName="collect-profiles" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.759881 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.770561 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np75l"] Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.919528 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gqtx\" (UniqueName: \"kubernetes.io/projected/88173a29-9ce3-48f9-960c-10781616a17b-kube-api-access-7gqtx\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.919880 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-catalog-content\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:11 crc kubenswrapper[4787]: I0219 20:18:11.919957 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-utilities\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.022434 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gqtx\" (UniqueName: \"kubernetes.io/projected/88173a29-9ce3-48f9-960c-10781616a17b-kube-api-access-7gqtx\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.022795 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-catalog-content\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.022838 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-utilities\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.023462 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-catalog-content\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.023489 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-utilities\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.047511 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gqtx\" (UniqueName: \"kubernetes.io/projected/88173a29-9ce3-48f9-960c-10781616a17b-kube-api-access-7gqtx\") pod \"redhat-operators-np75l\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.081739 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.583788 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np75l"] Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.828161 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerStarted","Data":"c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303"} Feb 19 20:18:12 crc kubenswrapper[4787]: I0219 20:18:12.828572 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerStarted","Data":"60aab8e454b2ccf1687b0839077c00e0f646ecda00d40b512f9f6400504cd0f6"} Feb 19 20:18:13 crc kubenswrapper[4787]: I0219 20:18:13.839302 4787 generic.go:334] "Generic (PLEG): container finished" podID="88173a29-9ce3-48f9-960c-10781616a17b" containerID="c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303" exitCode=0 Feb 19 20:18:13 crc kubenswrapper[4787]: I0219 20:18:13.839459 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerDied","Data":"c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303"} Feb 19 20:18:14 crc kubenswrapper[4787]: I0219 20:18:14.852708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerStarted","Data":"0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005"} Feb 19 20:18:19 crc kubenswrapper[4787]: I0219 20:18:19.907109 4787 generic.go:334] "Generic (PLEG): container finished" podID="88173a29-9ce3-48f9-960c-10781616a17b" 
containerID="0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005" exitCode=0 Feb 19 20:18:19 crc kubenswrapper[4787]: I0219 20:18:19.907192 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerDied","Data":"0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005"} Feb 19 20:18:20 crc kubenswrapper[4787]: I0219 20:18:20.922072 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerStarted","Data":"131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129"} Feb 19 20:18:20 crc kubenswrapper[4787]: I0219 20:18:20.950092 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-np75l" podStartSLOduration=3.454660293 podStartE2EDuration="9.950065414s" podCreationTimestamp="2026-02-19 20:18:11 +0000 UTC" firstStartedPulling="2026-02-19 20:18:13.841862894 +0000 UTC m=+3561.632528836" lastFinishedPulling="2026-02-19 20:18:20.337268025 +0000 UTC m=+3568.127933957" observedRunningTime="2026-02-19 20:18:20.943035024 +0000 UTC m=+3568.733700976" watchObservedRunningTime="2026-02-19 20:18:20.950065414 +0000 UTC m=+3568.740731356" Feb 19 20:18:22 crc kubenswrapper[4787]: I0219 20:18:22.082409 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:22 crc kubenswrapper[4787]: I0219 20:18:22.082475 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:23 crc kubenswrapper[4787]: I0219 20:18:23.137030 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-np75l" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="registry-server" 
probeResult="failure" output=< Feb 19 20:18:23 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:18:23 crc kubenswrapper[4787]: > Feb 19 20:18:33 crc kubenswrapper[4787]: I0219 20:18:33.132725 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-np75l" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="registry-server" probeResult="failure" output=< Feb 19 20:18:33 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:18:33 crc kubenswrapper[4787]: > Feb 19 20:18:39 crc kubenswrapper[4787]: I0219 20:18:39.262926 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:18:39 crc kubenswrapper[4787]: I0219 20:18:39.264295 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:18:39 crc kubenswrapper[4787]: I0219 20:18:39.264422 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:18:39 crc kubenswrapper[4787]: I0219 20:18:39.265489 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:18:39 crc 
kubenswrapper[4787]: I0219 20:18:39.265651 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" gracePeriod=600 Feb 19 20:18:39 crc kubenswrapper[4787]: E0219 20:18:39.392256 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:18:40 crc kubenswrapper[4787]: I0219 20:18:40.128057 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" exitCode=0 Feb 19 20:18:40 crc kubenswrapper[4787]: I0219 20:18:40.128118 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42"} Feb 19 20:18:40 crc kubenswrapper[4787]: I0219 20:18:40.128464 4787 scope.go:117] "RemoveContainer" containerID="3bfbf101fd8e0f051d1d7411a2079b0ccb73f858ae4dad6b4039b35f4381ec81" Feb 19 20:18:40 crc kubenswrapper[4787]: I0219 20:18:40.129463 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:18:40 crc kubenswrapper[4787]: E0219 20:18:40.129996 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:18:42 crc kubenswrapper[4787]: I0219 20:18:42.132654 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:42 crc kubenswrapper[4787]: I0219 20:18:42.195854 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:43 crc kubenswrapper[4787]: I0219 20:18:43.334230 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np75l"] Feb 19 20:18:43 crc kubenswrapper[4787]: I0219 20:18:43.346027 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-np75l" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="registry-server" containerID="cri-o://131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129" gracePeriod=2 Feb 19 20:18:43 crc kubenswrapper[4787]: I0219 20:18:43.964237 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.094647 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-utilities\") pod \"88173a29-9ce3-48f9-960c-10781616a17b\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.094963 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-catalog-content\") pod \"88173a29-9ce3-48f9-960c-10781616a17b\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.095051 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gqtx\" (UniqueName: \"kubernetes.io/projected/88173a29-9ce3-48f9-960c-10781616a17b-kube-api-access-7gqtx\") pod \"88173a29-9ce3-48f9-960c-10781616a17b\" (UID: \"88173a29-9ce3-48f9-960c-10781616a17b\") " Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.096286 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-utilities" (OuterVolumeSpecName: "utilities") pod "88173a29-9ce3-48f9-960c-10781616a17b" (UID: "88173a29-9ce3-48f9-960c-10781616a17b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.107095 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88173a29-9ce3-48f9-960c-10781616a17b-kube-api-access-7gqtx" (OuterVolumeSpecName: "kube-api-access-7gqtx") pod "88173a29-9ce3-48f9-960c-10781616a17b" (UID: "88173a29-9ce3-48f9-960c-10781616a17b"). InnerVolumeSpecName "kube-api-access-7gqtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.198131 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gqtx\" (UniqueName: \"kubernetes.io/projected/88173a29-9ce3-48f9-960c-10781616a17b-kube-api-access-7gqtx\") on node \"crc\" DevicePath \"\"" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.198181 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.237196 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88173a29-9ce3-48f9-960c-10781616a17b" (UID: "88173a29-9ce3-48f9-960c-10781616a17b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.300110 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88173a29-9ce3-48f9-960c-10781616a17b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.357521 4787 generic.go:334] "Generic (PLEG): container finished" podID="88173a29-9ce3-48f9-960c-10781616a17b" containerID="131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129" exitCode=0 Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.357575 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerDied","Data":"131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129"} Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.357580 4787 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np75l" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.357626 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np75l" event={"ID":"88173a29-9ce3-48f9-960c-10781616a17b","Type":"ContainerDied","Data":"60aab8e454b2ccf1687b0839077c00e0f646ecda00d40b512f9f6400504cd0f6"} Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.357651 4787 scope.go:117] "RemoveContainer" containerID="131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.396696 4787 scope.go:117] "RemoveContainer" containerID="0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.398330 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-np75l"] Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.409035 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-np75l"] Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.433375 4787 scope.go:117] "RemoveContainer" containerID="c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.475416 4787 scope.go:117] "RemoveContainer" containerID="131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129" Feb 19 20:18:44 crc kubenswrapper[4787]: E0219 20:18:44.475920 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129\": container with ID starting with 131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129 not found: ID does not exist" containerID="131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.475964 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129"} err="failed to get container status \"131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129\": rpc error: code = NotFound desc = could not find container \"131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129\": container with ID starting with 131ea5db7a2763141c1b9ec54977bc7c5f1270f4a5a91c7ce1044df1bced1129 not found: ID does not exist" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.475991 4787 scope.go:117] "RemoveContainer" containerID="0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005" Feb 19 20:18:44 crc kubenswrapper[4787]: E0219 20:18:44.476345 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005\": container with ID starting with 0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005 not found: ID does not exist" containerID="0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.476371 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005"} err="failed to get container status \"0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005\": rpc error: code = NotFound desc = could not find container \"0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005\": container with ID starting with 0227342529d01a49e6b5f63ede09eee1f47c6f493ecf63a1305ff8e9e351e005 not found: ID does not exist" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.476386 4787 scope.go:117] "RemoveContainer" containerID="c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303" Feb 19 20:18:44 crc kubenswrapper[4787]: E0219 
20:18:44.476873 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303\": container with ID starting with c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303 not found: ID does not exist" containerID="c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.476913 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303"} err="failed to get container status \"c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303\": rpc error: code = NotFound desc = could not find container \"c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303\": container with ID starting with c2b62832d3a3410ca7fab76cc471bad1589c90e031ffb1433fb15dbb8baf3303 not found: ID does not exist" Feb 19 20:18:44 crc kubenswrapper[4787]: I0219 20:18:44.911095 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88173a29-9ce3-48f9-960c-10781616a17b" path="/var/lib/kubelet/pods/88173a29-9ce3-48f9-960c-10781616a17b/volumes" Feb 19 20:18:52 crc kubenswrapper[4787]: I0219 20:18:52.906792 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:18:52 crc kubenswrapper[4787]: E0219 20:18:52.907743 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:19:05 crc kubenswrapper[4787]: I0219 20:19:05.892409 
4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:19:05 crc kubenswrapper[4787]: E0219 20:19:05.893230 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:19:18 crc kubenswrapper[4787]: I0219 20:19:18.892204 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:19:18 crc kubenswrapper[4787]: E0219 20:19:18.893015 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:19:31 crc kubenswrapper[4787]: I0219 20:19:31.892702 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:19:31 crc kubenswrapper[4787]: E0219 20:19:31.893622 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:19:42 crc kubenswrapper[4787]: I0219 
20:19:42.902362 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:19:42 crc kubenswrapper[4787]: E0219 20:19:42.903297 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:19:53 crc kubenswrapper[4787]: I0219 20:19:53.892967 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:19:53 crc kubenswrapper[4787]: E0219 20:19:53.893888 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:20:05 crc kubenswrapper[4787]: I0219 20:20:05.892480 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:20:05 crc kubenswrapper[4787]: E0219 20:20:05.893495 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:20:18 crc 
kubenswrapper[4787]: I0219 20:20:18.892641 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:20:18 crc kubenswrapper[4787]: E0219 20:20:18.894746 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.812141 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dw989"] Feb 19 20:20:30 crc kubenswrapper[4787]: E0219 20:20:30.813520 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="registry-server" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.813544 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="registry-server" Feb 19 20:20:30 crc kubenswrapper[4787]: E0219 20:20:30.813571 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="extract-content" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.813579 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="extract-content" Feb 19 20:20:30 crc kubenswrapper[4787]: E0219 20:20:30.813598 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="extract-utilities" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.813611 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="88173a29-9ce3-48f9-960c-10781616a17b" 
containerName="extract-utilities" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.813943 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="88173a29-9ce3-48f9-960c-10781616a17b" containerName="registry-server" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.816201 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.849933 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dw989"] Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.916362 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-utilities\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.916573 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72tlg\" (UniqueName: \"kubernetes.io/projected/cfa0eb2f-77b2-4031-8468-2fea98527c64-kube-api-access-72tlg\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:30 crc kubenswrapper[4787]: I0219 20:20:30.916693 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-catalog-content\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.019726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-catalog-content\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.020351 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-catalog-content\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.021361 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-utilities\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.021670 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72tlg\" (UniqueName: \"kubernetes.io/projected/cfa0eb2f-77b2-4031-8468-2fea98527c64-kube-api-access-72tlg\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.021771 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-utilities\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.051988 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72tlg\" (UniqueName: 
\"kubernetes.io/projected/cfa0eb2f-77b2-4031-8468-2fea98527c64-kube-api-access-72tlg\") pod \"community-operators-dw989\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.144186 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.741598 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dw989"] Feb 19 20:20:31 crc kubenswrapper[4787]: I0219 20:20:31.891743 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:20:31 crc kubenswrapper[4787]: E0219 20:20:31.892007 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:20:32 crc kubenswrapper[4787]: I0219 20:20:32.511375 4787 generic.go:334] "Generic (PLEG): container finished" podID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerID="8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d" exitCode=0 Feb 19 20:20:32 crc kubenswrapper[4787]: I0219 20:20:32.511722 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerDied","Data":"8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d"} Feb 19 20:20:32 crc kubenswrapper[4787]: I0219 20:20:32.511771 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerStarted","Data":"9212228bc7c012c79938a45a4a97200cf0a3eaa5708283e4575f10cd34fec031"} Feb 19 20:20:32 crc kubenswrapper[4787]: I0219 20:20:32.523822 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:20:33 crc kubenswrapper[4787]: I0219 20:20:33.523305 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerStarted","Data":"879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83"} Feb 19 20:20:35 crc kubenswrapper[4787]: I0219 20:20:35.543448 4787 generic.go:334] "Generic (PLEG): container finished" podID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerID="879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83" exitCode=0 Feb 19 20:20:35 crc kubenswrapper[4787]: I0219 20:20:35.543868 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerDied","Data":"879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83"} Feb 19 20:20:36 crc kubenswrapper[4787]: I0219 20:20:36.555654 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerStarted","Data":"d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535"} Feb 19 20:20:36 crc kubenswrapper[4787]: I0219 20:20:36.588646 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dw989" podStartSLOduration=3.154772157 podStartE2EDuration="6.588624518s" podCreationTimestamp="2026-02-19 20:20:30 +0000 UTC" firstStartedPulling="2026-02-19 20:20:32.522796098 +0000 UTC m=+3700.313462040" 
lastFinishedPulling="2026-02-19 20:20:35.956648459 +0000 UTC m=+3703.747314401" observedRunningTime="2026-02-19 20:20:36.576221515 +0000 UTC m=+3704.366887467" watchObservedRunningTime="2026-02-19 20:20:36.588624518 +0000 UTC m=+3704.379290460" Feb 19 20:20:41 crc kubenswrapper[4787]: I0219 20:20:41.144414 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:41 crc kubenswrapper[4787]: I0219 20:20:41.144938 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:41 crc kubenswrapper[4787]: I0219 20:20:41.194917 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:41 crc kubenswrapper[4787]: I0219 20:20:41.726934 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:41 crc kubenswrapper[4787]: I0219 20:20:41.829810 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dw989"] Feb 19 20:20:43 crc kubenswrapper[4787]: I0219 20:20:43.671601 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dw989" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="registry-server" containerID="cri-o://d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535" gracePeriod=2 Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.243430 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.298051 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72tlg\" (UniqueName: \"kubernetes.io/projected/cfa0eb2f-77b2-4031-8468-2fea98527c64-kube-api-access-72tlg\") pod \"cfa0eb2f-77b2-4031-8468-2fea98527c64\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.298502 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-catalog-content\") pod \"cfa0eb2f-77b2-4031-8468-2fea98527c64\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.298659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-utilities\") pod \"cfa0eb2f-77b2-4031-8468-2fea98527c64\" (UID: \"cfa0eb2f-77b2-4031-8468-2fea98527c64\") " Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.299267 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-utilities" (OuterVolumeSpecName: "utilities") pod "cfa0eb2f-77b2-4031-8468-2fea98527c64" (UID: "cfa0eb2f-77b2-4031-8468-2fea98527c64"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.300041 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.316194 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa0eb2f-77b2-4031-8468-2fea98527c64-kube-api-access-72tlg" (OuterVolumeSpecName: "kube-api-access-72tlg") pod "cfa0eb2f-77b2-4031-8468-2fea98527c64" (UID: "cfa0eb2f-77b2-4031-8468-2fea98527c64"). InnerVolumeSpecName "kube-api-access-72tlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.361685 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfa0eb2f-77b2-4031-8468-2fea98527c64" (UID: "cfa0eb2f-77b2-4031-8468-2fea98527c64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.402474 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa0eb2f-77b2-4031-8468-2fea98527c64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.402521 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72tlg\" (UniqueName: \"kubernetes.io/projected/cfa0eb2f-77b2-4031-8468-2fea98527c64-kube-api-access-72tlg\") on node \"crc\" DevicePath \"\"" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.686717 4787 generic.go:334] "Generic (PLEG): container finished" podID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerID="d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535" exitCode=0 Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.686768 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dw989" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.686767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerDied","Data":"d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535"} Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.686926 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dw989" event={"ID":"cfa0eb2f-77b2-4031-8468-2fea98527c64","Type":"ContainerDied","Data":"9212228bc7c012c79938a45a4a97200cf0a3eaa5708283e4575f10cd34fec031"} Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.686964 4787 scope.go:117] "RemoveContainer" containerID="d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.726825 4787 scope.go:117] "RemoveContainer" 
containerID="879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.737472 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dw989"] Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.749924 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dw989"] Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.752551 4787 scope.go:117] "RemoveContainer" containerID="8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.808049 4787 scope.go:117] "RemoveContainer" containerID="d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535" Feb 19 20:20:44 crc kubenswrapper[4787]: E0219 20:20:44.808573 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535\": container with ID starting with d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535 not found: ID does not exist" containerID="d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.808636 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535"} err="failed to get container status \"d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535\": rpc error: code = NotFound desc = could not find container \"d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535\": container with ID starting with d4f77301e410c93459a8f6d81df82199d55550a328780e4e5833408977a79535 not found: ID does not exist" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.808673 4787 scope.go:117] "RemoveContainer" 
containerID="879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83" Feb 19 20:20:44 crc kubenswrapper[4787]: E0219 20:20:44.809049 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83\": container with ID starting with 879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83 not found: ID does not exist" containerID="879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.809084 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83"} err="failed to get container status \"879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83\": rpc error: code = NotFound desc = could not find container \"879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83\": container with ID starting with 879ae2fb145a92514d7c745d51a2ae3f63477803114a0884a465f8a792e4cb83 not found: ID does not exist" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.809115 4787 scope.go:117] "RemoveContainer" containerID="8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d" Feb 19 20:20:44 crc kubenswrapper[4787]: E0219 20:20:44.809600 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d\": container with ID starting with 8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d not found: ID does not exist" containerID="8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.809681 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d"} err="failed to get container status \"8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d\": rpc error: code = NotFound desc = could not find container \"8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d\": container with ID starting with 8e4baceb3f6076a9ab0051d55f51d64b95ea31674f642be9c873d7eb243fd49d not found: ID does not exist" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.912951 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:20:44 crc kubenswrapper[4787]: E0219 20:20:44.913658 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:20:44 crc kubenswrapper[4787]: I0219 20:20:44.934216 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" path="/var/lib/kubelet/pods/cfa0eb2f-77b2-4031-8468-2fea98527c64/volumes" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.856758 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cdsmc"] Feb 19 20:20:50 crc kubenswrapper[4787]: E0219 20:20:50.858099 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="extract-content" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.858119 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="extract-content" Feb 19 20:20:50 crc kubenswrapper[4787]: E0219 
20:20:50.858138 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="extract-utilities" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.858147 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="extract-utilities" Feb 19 20:20:50 crc kubenswrapper[4787]: E0219 20:20:50.858171 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="registry-server" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.858180 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="registry-server" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.858488 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa0eb2f-77b2-4031-8468-2fea98527c64" containerName="registry-server" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.860349 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:50 crc kubenswrapper[4787]: I0219 20:20:50.881374 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdsmc"] Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.000440 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-catalog-content\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.000823 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-utilities\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.001036 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52qk\" (UniqueName: \"kubernetes.io/projected/e5730dd6-9b04-48d9-bca8-b3ea299333bd-kube-api-access-x52qk\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.102764 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-catalog-content\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.102850 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-utilities\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.102994 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x52qk\" (UniqueName: \"kubernetes.io/projected/e5730dd6-9b04-48d9-bca8-b3ea299333bd-kube-api-access-x52qk\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.103408 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-catalog-content\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.103517 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-utilities\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.133361 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x52qk\" (UniqueName: \"kubernetes.io/projected/e5730dd6-9b04-48d9-bca8-b3ea299333bd-kube-api-access-x52qk\") pod \"redhat-marketplace-cdsmc\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.182541 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.726383 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdsmc"] Feb 19 20:20:51 crc kubenswrapper[4787]: I0219 20:20:51.760134 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerStarted","Data":"35551ffc90f529a637a4c31d5803db0588585b4495d198b142554ec951dca885"} Feb 19 20:20:52 crc kubenswrapper[4787]: I0219 20:20:52.772163 4787 generic.go:334] "Generic (PLEG): container finished" podID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerID="25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34" exitCode=0 Feb 19 20:20:52 crc kubenswrapper[4787]: I0219 20:20:52.772264 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerDied","Data":"25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34"} Feb 19 20:20:53 crc kubenswrapper[4787]: I0219 20:20:53.784838 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerStarted","Data":"d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7"} Feb 19 20:20:54 crc kubenswrapper[4787]: I0219 20:20:54.800329 4787 generic.go:334] "Generic (PLEG): container finished" podID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerID="d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7" exitCode=0 Feb 19 20:20:54 crc kubenswrapper[4787]: I0219 20:20:54.800453 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" 
event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerDied","Data":"d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7"} Feb 19 20:20:55 crc kubenswrapper[4787]: I0219 20:20:55.812940 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerStarted","Data":"088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f"} Feb 19 20:20:55 crc kubenswrapper[4787]: I0219 20:20:55.832456 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cdsmc" podStartSLOduration=3.329724487 podStartE2EDuration="5.832439184s" podCreationTimestamp="2026-02-19 20:20:50 +0000 UTC" firstStartedPulling="2026-02-19 20:20:52.775078355 +0000 UTC m=+3720.565744307" lastFinishedPulling="2026-02-19 20:20:55.277793062 +0000 UTC m=+3723.068459004" observedRunningTime="2026-02-19 20:20:55.831277551 +0000 UTC m=+3723.621943493" watchObservedRunningTime="2026-02-19 20:20:55.832439184 +0000 UTC m=+3723.623105126" Feb 19 20:20:59 crc kubenswrapper[4787]: I0219 20:20:59.892562 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:20:59 crc kubenswrapper[4787]: E0219 20:20:59.893470 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:21:01 crc kubenswrapper[4787]: I0219 20:21:01.183832 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:21:01 crc 
kubenswrapper[4787]: I0219 20:21:01.185259 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:21:01 crc kubenswrapper[4787]: I0219 20:21:01.232966 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:21:01 crc kubenswrapper[4787]: I0219 20:21:01.938051 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:21:03 crc kubenswrapper[4787]: I0219 20:21:03.447912 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdsmc"] Feb 19 20:21:03 crc kubenswrapper[4787]: I0219 20:21:03.913072 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cdsmc" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="registry-server" containerID="cri-o://088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f" gracePeriod=2 Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.552951 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.677779 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x52qk\" (UniqueName: \"kubernetes.io/projected/e5730dd6-9b04-48d9-bca8-b3ea299333bd-kube-api-access-x52qk\") pod \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.678079 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-catalog-content\") pod \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.678355 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-utilities\") pod \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\" (UID: \"e5730dd6-9b04-48d9-bca8-b3ea299333bd\") " Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.682963 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-utilities" (OuterVolumeSpecName: "utilities") pod "e5730dd6-9b04-48d9-bca8-b3ea299333bd" (UID: "e5730dd6-9b04-48d9-bca8-b3ea299333bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.687389 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5730dd6-9b04-48d9-bca8-b3ea299333bd-kube-api-access-x52qk" (OuterVolumeSpecName: "kube-api-access-x52qk") pod "e5730dd6-9b04-48d9-bca8-b3ea299333bd" (UID: "e5730dd6-9b04-48d9-bca8-b3ea299333bd"). InnerVolumeSpecName "kube-api-access-x52qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.723015 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5730dd6-9b04-48d9-bca8-b3ea299333bd" (UID: "e5730dd6-9b04-48d9-bca8-b3ea299333bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.791050 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.791107 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x52qk\" (UniqueName: \"kubernetes.io/projected/e5730dd6-9b04-48d9-bca8-b3ea299333bd-kube-api-access-x52qk\") on node \"crc\" DevicePath \"\"" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.791125 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5730dd6-9b04-48d9-bca8-b3ea299333bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.926115 4787 generic.go:334] "Generic (PLEG): container finished" podID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerID="088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f" exitCode=0 Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.926214 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerDied","Data":"088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f"} Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.926255 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-cdsmc" event={"ID":"e5730dd6-9b04-48d9-bca8-b3ea299333bd","Type":"ContainerDied","Data":"35551ffc90f529a637a4c31d5803db0588585b4495d198b142554ec951dca885"} Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.926275 4787 scope.go:117] "RemoveContainer" containerID="088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.926507 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cdsmc" Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.959554 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdsmc"] Feb 19 20:21:04 crc kubenswrapper[4787]: I0219 20:21:04.973311 4787 scope.go:117] "RemoveContainer" containerID="d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.034159 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cdsmc"] Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.047333 4787 scope.go:117] "RemoveContainer" containerID="25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.126958 4787 scope.go:117] "RemoveContainer" containerID="088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f" Feb 19 20:21:05 crc kubenswrapper[4787]: E0219 20:21:05.137349 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f\": container with ID starting with 088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f not found: ID does not exist" containerID="088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.147596 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f"} err="failed to get container status \"088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f\": rpc error: code = NotFound desc = could not find container \"088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f\": container with ID starting with 088911a69770defb5844b7cbec749c0a170c65e7d02aa3920cadb9b3aebc525f not found: ID does not exist" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.147774 4787 scope.go:117] "RemoveContainer" containerID="d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7" Feb 19 20:21:05 crc kubenswrapper[4787]: E0219 20:21:05.148860 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7\": container with ID starting with d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7 not found: ID does not exist" containerID="d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.148913 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7"} err="failed to get container status \"d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7\": rpc error: code = NotFound desc = could not find container \"d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7\": container with ID starting with d8aecc42f723a62f75f0c752c1f05589767ba9c2d2e6576ec72a5c999e06a7e7 not found: ID does not exist" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.148978 4787 scope.go:117] "RemoveContainer" containerID="25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34" Feb 19 20:21:05 crc kubenswrapper[4787]: E0219 
20:21:05.149589 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34\": container with ID starting with 25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34 not found: ID does not exist" containerID="25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34" Feb 19 20:21:05 crc kubenswrapper[4787]: I0219 20:21:05.149631 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34"} err="failed to get container status \"25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34\": rpc error: code = NotFound desc = could not find container \"25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34\": container with ID starting with 25476c0c08466fa671e8cdfea9136558bf5b46c5e245ad4ade8faa5594e82d34 not found: ID does not exist" Feb 19 20:21:06 crc kubenswrapper[4787]: I0219 20:21:06.903197 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" path="/var/lib/kubelet/pods/e5730dd6-9b04-48d9-bca8-b3ea299333bd/volumes" Feb 19 20:21:13 crc kubenswrapper[4787]: I0219 20:21:13.893191 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:21:13 crc kubenswrapper[4787]: E0219 20:21:13.893920 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:21:26 crc kubenswrapper[4787]: I0219 20:21:26.892094 
4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:21:26 crc kubenswrapper[4787]: E0219 20:21:26.892926 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:21:41 crc kubenswrapper[4787]: I0219 20:21:41.892002 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:21:41 crc kubenswrapper[4787]: E0219 20:21:41.892899 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:21:54 crc kubenswrapper[4787]: I0219 20:21:54.891849 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:21:54 crc kubenswrapper[4787]: E0219 20:21:54.892777 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:22:08 crc kubenswrapper[4787]: I0219 
20:22:08.891737 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:22:08 crc kubenswrapper[4787]: E0219 20:22:08.892591 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:22:23 crc kubenswrapper[4787]: I0219 20:22:23.891738 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:22:23 crc kubenswrapper[4787]: E0219 20:22:23.892452 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:22:36 crc kubenswrapper[4787]: I0219 20:22:36.892711 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:22:36 crc kubenswrapper[4787]: E0219 20:22:36.893519 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:22:49 crc 
kubenswrapper[4787]: I0219 20:22:49.892018 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:22:49 crc kubenswrapper[4787]: E0219 20:22:49.893058 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:23:01 crc kubenswrapper[4787]: I0219 20:23:01.891844 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:23:01 crc kubenswrapper[4787]: E0219 20:23:01.892687 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:23:16 crc kubenswrapper[4787]: I0219 20:23:16.894107 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:23:16 crc kubenswrapper[4787]: E0219 20:23:16.896267 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 
19 20:23:28 crc kubenswrapper[4787]: I0219 20:23:28.895435 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:23:28 crc kubenswrapper[4787]: E0219 20:23:28.896252 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:23:40 crc kubenswrapper[4787]: I0219 20:23:40.891978 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:23:41 crc kubenswrapper[4787]: I0219 20:23:41.636231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"e9cc9fae3f266cfac46b6d5cd47683932a3b9623ac8d694ecefb16f76cacd602"} Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.816320 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62h7v"] Feb 19 20:24:21 crc kubenswrapper[4787]: E0219 20:24:21.817453 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="extract-utilities" Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.817470 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="extract-utilities" Feb 19 20:24:21 crc kubenswrapper[4787]: E0219 20:24:21.817505 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="registry-server" Feb 19 20:24:21 crc 
kubenswrapper[4787]: I0219 20:24:21.817511 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="registry-server" Feb 19 20:24:21 crc kubenswrapper[4787]: E0219 20:24:21.817546 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="extract-content" Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.817553 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="extract-content" Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.817796 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5730dd6-9b04-48d9-bca8-b3ea299333bd" containerName="registry-server" Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.819427 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.837263 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62h7v"] Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.956241 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-catalog-content\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:21 crc kubenswrapper[4787]: I0219 20:24:21.956380 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4nl\" (UniqueName: \"kubernetes.io/projected/cf1dfe53-0856-4f24-a419-76876877d2fc-kube-api-access-nl4nl\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:21 
crc kubenswrapper[4787]: I0219 20:24:21.956513 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-utilities\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.059252 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-catalog-content\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.059341 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4nl\" (UniqueName: \"kubernetes.io/projected/cf1dfe53-0856-4f24-a419-76876877d2fc-kube-api-access-nl4nl\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.059428 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-utilities\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.059827 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-catalog-content\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc 
kubenswrapper[4787]: I0219 20:24:22.059939 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-utilities\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.087496 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4nl\" (UniqueName: \"kubernetes.io/projected/cf1dfe53-0856-4f24-a419-76876877d2fc-kube-api-access-nl4nl\") pod \"certified-operators-62h7v\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.156466 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:22 crc kubenswrapper[4787]: I0219 20:24:22.690883 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62h7v"] Feb 19 20:24:23 crc kubenswrapper[4787]: I0219 20:24:23.074580 4787 generic.go:334] "Generic (PLEG): container finished" podID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerID="262d003d5aa5a0ab1a30375d7ec96d984ece9d4fdbc3bfdacd54c79c04991ff3" exitCode=0 Feb 19 20:24:23 crc kubenswrapper[4787]: I0219 20:24:23.074920 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerDied","Data":"262d003d5aa5a0ab1a30375d7ec96d984ece9d4fdbc3bfdacd54c79c04991ff3"} Feb 19 20:24:23 crc kubenswrapper[4787]: I0219 20:24:23.074955 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" 
event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerStarted","Data":"6411aedc567261b417aff791a74d70a731912f9f98845b83443adfdea80d3c29"} Feb 19 20:24:24 crc kubenswrapper[4787]: I0219 20:24:24.087046 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerStarted","Data":"1f6c1a1ed7c4262ecd3d0ef0b10c6472f1f14b9ae9216329527905f3b4ac77fd"} Feb 19 20:24:27 crc kubenswrapper[4787]: I0219 20:24:27.119781 4787 generic.go:334] "Generic (PLEG): container finished" podID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerID="1f6c1a1ed7c4262ecd3d0ef0b10c6472f1f14b9ae9216329527905f3b4ac77fd" exitCode=0 Feb 19 20:24:27 crc kubenswrapper[4787]: I0219 20:24:27.119846 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerDied","Data":"1f6c1a1ed7c4262ecd3d0ef0b10c6472f1f14b9ae9216329527905f3b4ac77fd"} Feb 19 20:24:29 crc kubenswrapper[4787]: I0219 20:24:29.154074 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerStarted","Data":"2a06e3866f238c9a84b78d56fc10634d746e58684d09b7c043a82a041be482c8"} Feb 19 20:24:29 crc kubenswrapper[4787]: I0219 20:24:29.182480 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62h7v" podStartSLOduration=3.7390157950000003 podStartE2EDuration="8.18245531s" podCreationTimestamp="2026-02-19 20:24:21 +0000 UTC" firstStartedPulling="2026-02-19 20:24:23.076954833 +0000 UTC m=+3930.867620775" lastFinishedPulling="2026-02-19 20:24:27.520394348 +0000 UTC m=+3935.311060290" observedRunningTime="2026-02-19 20:24:29.178471267 +0000 UTC m=+3936.969137209" watchObservedRunningTime="2026-02-19 20:24:29.18245531 +0000 UTC 
m=+3936.973121252" Feb 19 20:24:32 crc kubenswrapper[4787]: I0219 20:24:32.157081 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:32 crc kubenswrapper[4787]: I0219 20:24:32.157655 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:32 crc kubenswrapper[4787]: I0219 20:24:32.218058 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:42 crc kubenswrapper[4787]: I0219 20:24:42.222208 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:42 crc kubenswrapper[4787]: I0219 20:24:42.289397 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62h7v"] Feb 19 20:24:42 crc kubenswrapper[4787]: I0219 20:24:42.312462 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62h7v" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="registry-server" containerID="cri-o://2a06e3866f238c9a84b78d56fc10634d746e58684d09b7c043a82a041be482c8" gracePeriod=2 Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.340258 4787 generic.go:334] "Generic (PLEG): container finished" podID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerID="2a06e3866f238c9a84b78d56fc10634d746e58684d09b7c043a82a041be482c8" exitCode=0 Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.340381 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerDied","Data":"2a06e3866f238c9a84b78d56fc10634d746e58684d09b7c043a82a041be482c8"} Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.733272 4787 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.822947 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl4nl\" (UniqueName: \"kubernetes.io/projected/cf1dfe53-0856-4f24-a419-76876877d2fc-kube-api-access-nl4nl\") pod \"cf1dfe53-0856-4f24-a419-76876877d2fc\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.823031 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-utilities\") pod \"cf1dfe53-0856-4f24-a419-76876877d2fc\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.823953 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-utilities" (OuterVolumeSpecName: "utilities") pod "cf1dfe53-0856-4f24-a419-76876877d2fc" (UID: "cf1dfe53-0856-4f24-a419-76876877d2fc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.824128 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-catalog-content\") pod \"cf1dfe53-0856-4f24-a419-76876877d2fc\" (UID: \"cf1dfe53-0856-4f24-a419-76876877d2fc\") " Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.825221 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.848737 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1dfe53-0856-4f24-a419-76876877d2fc-kube-api-access-nl4nl" (OuterVolumeSpecName: "kube-api-access-nl4nl") pod "cf1dfe53-0856-4f24-a419-76876877d2fc" (UID: "cf1dfe53-0856-4f24-a419-76876877d2fc"). InnerVolumeSpecName "kube-api-access-nl4nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.881392 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf1dfe53-0856-4f24-a419-76876877d2fc" (UID: "cf1dfe53-0856-4f24-a419-76876877d2fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.927590 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf1dfe53-0856-4f24-a419-76876877d2fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:43 crc kubenswrapper[4787]: I0219 20:24:43.927901 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl4nl\" (UniqueName: \"kubernetes.io/projected/cf1dfe53-0856-4f24-a419-76876877d2fc-kube-api-access-nl4nl\") on node \"crc\" DevicePath \"\"" Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.353210 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62h7v" event={"ID":"cf1dfe53-0856-4f24-a419-76876877d2fc","Type":"ContainerDied","Data":"6411aedc567261b417aff791a74d70a731912f9f98845b83443adfdea80d3c29"} Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.353826 4787 scope.go:117] "RemoveContainer" containerID="2a06e3866f238c9a84b78d56fc10634d746e58684d09b7c043a82a041be482c8" Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.353273 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62h7v" Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.377613 4787 scope.go:117] "RemoveContainer" containerID="1f6c1a1ed7c4262ecd3d0ef0b10c6472f1f14b9ae9216329527905f3b4ac77fd" Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.393111 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62h7v"] Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.404975 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62h7v"] Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.410745 4787 scope.go:117] "RemoveContainer" containerID="262d003d5aa5a0ab1a30375d7ec96d984ece9d4fdbc3bfdacd54c79c04991ff3" Feb 19 20:24:44 crc kubenswrapper[4787]: I0219 20:24:44.907866 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" path="/var/lib/kubelet/pods/cf1dfe53-0856-4f24-a419-76876877d2fc/volumes" Feb 19 20:26:09 crc kubenswrapper[4787]: I0219 20:26:09.263027 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:26:09 crc kubenswrapper[4787]: I0219 20:26:09.263698 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:26:39 crc kubenswrapper[4787]: I0219 20:26:39.263403 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:26:39 crc kubenswrapper[4787]: I0219 20:26:39.264069 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:27:09 crc kubenswrapper[4787]: I0219 20:27:09.263069 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:27:09 crc kubenswrapper[4787]: I0219 20:27:09.263723 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:27:09 crc kubenswrapper[4787]: I0219 20:27:09.263787 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:27:09 crc kubenswrapper[4787]: I0219 20:27:09.909785 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9cc9fae3f266cfac46b6d5cd47683932a3b9623ac8d694ecefb16f76cacd602"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:27:09 crc kubenswrapper[4787]: I0219 20:27:09.910161 4787 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://e9cc9fae3f266cfac46b6d5cd47683932a3b9623ac8d694ecefb16f76cacd602" gracePeriod=600 Feb 19 20:27:10 crc kubenswrapper[4787]: I0219 20:27:10.920271 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="e9cc9fae3f266cfac46b6d5cd47683932a3b9623ac8d694ecefb16f76cacd602" exitCode=0 Feb 19 20:27:10 crc kubenswrapper[4787]: I0219 20:27:10.921767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"e9cc9fae3f266cfac46b6d5cd47683932a3b9623ac8d694ecefb16f76cacd602"} Feb 19 20:27:10 crc kubenswrapper[4787]: I0219 20:27:10.921845 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7"} Feb 19 20:27:10 crc kubenswrapper[4787]: I0219 20:27:10.921872 4787 scope.go:117] "RemoveContainer" containerID="4098833290a4af5a4944f2aad5771a99fc61c5f158813c9e371705e037731e42" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.056451 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pq64"] Feb 19 20:28:24 crc kubenswrapper[4787]: E0219 20:28:24.057556 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="extract-utilities" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.057575 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" 
containerName="extract-utilities" Feb 19 20:28:24 crc kubenswrapper[4787]: E0219 20:28:24.057632 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="registry-server" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.057640 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="registry-server" Feb 19 20:28:24 crc kubenswrapper[4787]: E0219 20:28:24.057663 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="extract-content" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.057670 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="extract-content" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.057918 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1dfe53-0856-4f24-a419-76876877d2fc" containerName="registry-server" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.060386 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.082180 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pq64"] Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.187730 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-utilities\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.187993 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-catalog-content\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.188035 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpqwj\" (UniqueName: \"kubernetes.io/projected/83185051-ef33-4efe-95bb-1894c78764c8-kube-api-access-wpqwj\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.290386 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-catalog-content\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.290432 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wpqwj\" (UniqueName: \"kubernetes.io/projected/83185051-ef33-4efe-95bb-1894c78764c8-kube-api-access-wpqwj\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.290587 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-utilities\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.290934 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-catalog-content\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.291070 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-utilities\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.539040 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpqwj\" (UniqueName: \"kubernetes.io/projected/83185051-ef33-4efe-95bb-1894c78764c8-kube-api-access-wpqwj\") pod \"redhat-operators-5pq64\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:24 crc kubenswrapper[4787]: I0219 20:28:24.684430 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:25 crc kubenswrapper[4787]: I0219 20:28:25.234642 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pq64"] Feb 19 20:28:25 crc kubenswrapper[4787]: I0219 20:28:25.708840 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerDied","Data":"9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3"} Feb 19 20:28:25 crc kubenswrapper[4787]: I0219 20:28:25.709375 4787 generic.go:334] "Generic (PLEG): container finished" podID="83185051-ef33-4efe-95bb-1894c78764c8" containerID="9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3" exitCode=0 Feb 19 20:28:25 crc kubenswrapper[4787]: I0219 20:28:25.709433 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerStarted","Data":"3508a2185d992a00b68b820ecd1731e3439bfde2fe801fa290e8122d6bbe6be5"} Feb 19 20:28:25 crc kubenswrapper[4787]: I0219 20:28:25.711689 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:28:27 crc kubenswrapper[4787]: I0219 20:28:27.736290 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerStarted","Data":"c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca"} Feb 19 20:28:32 crc kubenswrapper[4787]: I0219 20:28:32.793719 4787 generic.go:334] "Generic (PLEG): container finished" podID="83185051-ef33-4efe-95bb-1894c78764c8" containerID="c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca" exitCode=0 Feb 19 20:28:32 crc kubenswrapper[4787]: I0219 20:28:32.793817 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerDied","Data":"c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca"} Feb 19 20:28:34 crc kubenswrapper[4787]: I0219 20:28:34.816824 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerStarted","Data":"342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8"} Feb 19 20:28:34 crc kubenswrapper[4787]: I0219 20:28:34.853099 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pq64" podStartSLOduration=2.280067184 podStartE2EDuration="10.853078546s" podCreationTimestamp="2026-02-19 20:28:24 +0000 UTC" firstStartedPulling="2026-02-19 20:28:25.711310196 +0000 UTC m=+4173.501976138" lastFinishedPulling="2026-02-19 20:28:34.284321558 +0000 UTC m=+4182.074987500" observedRunningTime="2026-02-19 20:28:34.842458305 +0000 UTC m=+4182.633124257" watchObservedRunningTime="2026-02-19 20:28:34.853078546 +0000 UTC m=+4182.643744488" Feb 19 20:28:44 crc kubenswrapper[4787]: I0219 20:28:44.685540 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:44 crc kubenswrapper[4787]: I0219 20:28:44.686157 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:45 crc kubenswrapper[4787]: I0219 20:28:45.731502 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5pq64" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="registry-server" probeResult="failure" output=< Feb 19 20:28:45 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:28:45 crc kubenswrapper[4787]: > Feb 19 20:28:54 crc kubenswrapper[4787]: I0219 
20:28:54.740658 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:54 crc kubenswrapper[4787]: I0219 20:28:54.861576 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:55 crc kubenswrapper[4787]: I0219 20:28:55.256394 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pq64"] Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.033556 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pq64" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="registry-server" containerID="cri-o://342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8" gracePeriod=2 Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.790644 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.916563 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-catalog-content\") pod \"83185051-ef33-4efe-95bb-1894c78764c8\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.917133 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpqwj\" (UniqueName: \"kubernetes.io/projected/83185051-ef33-4efe-95bb-1894c78764c8-kube-api-access-wpqwj\") pod \"83185051-ef33-4efe-95bb-1894c78764c8\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.917240 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-utilities\") pod \"83185051-ef33-4efe-95bb-1894c78764c8\" (UID: \"83185051-ef33-4efe-95bb-1894c78764c8\") " Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.917910 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-utilities" (OuterVolumeSpecName: "utilities") pod "83185051-ef33-4efe-95bb-1894c78764c8" (UID: "83185051-ef33-4efe-95bb-1894c78764c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:28:56 crc kubenswrapper[4787]: I0219 20:28:56.926921 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83185051-ef33-4efe-95bb-1894c78764c8-kube-api-access-wpqwj" (OuterVolumeSpecName: "kube-api-access-wpqwj") pod "83185051-ef33-4efe-95bb-1894c78764c8" (UID: "83185051-ef33-4efe-95bb-1894c78764c8"). InnerVolumeSpecName "kube-api-access-wpqwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.020967 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpqwj\" (UniqueName: \"kubernetes.io/projected/83185051-ef33-4efe-95bb-1894c78764c8-kube-api-access-wpqwj\") on node \"crc\" DevicePath \"\"" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.021006 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.046266 4787 generic.go:334] "Generic (PLEG): container finished" podID="83185051-ef33-4efe-95bb-1894c78764c8" containerID="342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8" exitCode=0 Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.046307 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerDied","Data":"342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8"} Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.046333 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pq64" event={"ID":"83185051-ef33-4efe-95bb-1894c78764c8","Type":"ContainerDied","Data":"3508a2185d992a00b68b820ecd1731e3439bfde2fe801fa290e8122d6bbe6be5"} Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.046348 4787 scope.go:117] "RemoveContainer" containerID="342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.046476 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pq64" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.073555 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83185051-ef33-4efe-95bb-1894c78764c8" (UID: "83185051-ef33-4efe-95bb-1894c78764c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.084115 4787 scope.go:117] "RemoveContainer" containerID="c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.108251 4787 scope.go:117] "RemoveContainer" containerID="9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.123075 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83185051-ef33-4efe-95bb-1894c78764c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.181548 4787 scope.go:117] "RemoveContainer" containerID="342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8" Feb 19 20:28:57 crc kubenswrapper[4787]: E0219 20:28:57.182020 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8\": container with ID starting with 342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8 not found: ID does not exist" containerID="342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.182047 4787 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8"} err="failed to get container status \"342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8\": rpc error: code = NotFound desc = could not find container \"342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8\": container with ID starting with 342f5007990c2e204f8ba10b40a41049f54e64084408264affd7486e8aa71ab8 not found: ID does not exist" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.182067 4787 scope.go:117] "RemoveContainer" containerID="c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca" Feb 19 20:28:57 crc kubenswrapper[4787]: E0219 20:28:57.182263 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca\": container with ID starting with c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca not found: ID does not exist" containerID="c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.182288 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca"} err="failed to get container status \"c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca\": rpc error: code = NotFound desc = could not find container \"c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca\": container with ID starting with c43499eadc037f6729a3f86cd68af12af801e549b618f5f7d29e70a6dc4eb4ca not found: ID does not exist" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.182299 4787 scope.go:117] "RemoveContainer" containerID="9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3" Feb 19 20:28:57 crc kubenswrapper[4787]: E0219 20:28:57.182700 4787 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3\": container with ID starting with 9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3 not found: ID does not exist" containerID="9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.182718 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3"} err="failed to get container status \"9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3\": rpc error: code = NotFound desc = could not find container \"9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3\": container with ID starting with 9d0fc69956665fd5e7f1c7e78ad9b6d0f6c0b9f5ad5e43c722b35c37ccbe96f3 not found: ID does not exist" Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.416751 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pq64"] Feb 19 20:28:57 crc kubenswrapper[4787]: I0219 20:28:57.426217 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pq64"] Feb 19 20:28:58 crc kubenswrapper[4787]: I0219 20:28:58.924796 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83185051-ef33-4efe-95bb-1894c78764c8" path="/var/lib/kubelet/pods/83185051-ef33-4efe-95bb-1894c78764c8/volumes" Feb 19 20:29:39 crc kubenswrapper[4787]: I0219 20:29:39.263531 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:29:39 crc kubenswrapper[4787]: I0219 20:29:39.264229 4787 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.202303 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr"] Feb 19 20:30:00 crc kubenswrapper[4787]: E0219 20:30:00.204238 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="registry-server" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.204267 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="registry-server" Feb 19 20:30:00 crc kubenswrapper[4787]: E0219 20:30:00.204287 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="extract-content" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.204294 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="extract-content" Feb 19 20:30:00 crc kubenswrapper[4787]: E0219 20:30:00.204311 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="extract-utilities" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.204317 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="extract-utilities" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.204578 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="83185051-ef33-4efe-95bb-1894c78764c8" containerName="registry-server" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.205378 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.208787 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.209368 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.215521 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr"] Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.387559 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3ff382-2124-41ae-a329-b88f9d915a3a-secret-volume\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.387878 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl8mt\" (UniqueName: \"kubernetes.io/projected/ae3ff382-2124-41ae-a329-b88f9d915a3a-kube-api-access-sl8mt\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.387907 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3ff382-2124-41ae-a329-b88f9d915a3a-config-volume\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.490467 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl8mt\" (UniqueName: \"kubernetes.io/projected/ae3ff382-2124-41ae-a329-b88f9d915a3a-kube-api-access-sl8mt\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.490533 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3ff382-2124-41ae-a329-b88f9d915a3a-config-volume\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.490842 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3ff382-2124-41ae-a329-b88f9d915a3a-secret-volume\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.492010 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3ff382-2124-41ae-a329-b88f9d915a3a-config-volume\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.538678 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ae3ff382-2124-41ae-a329-b88f9d915a3a-secret-volume\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.543417 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl8mt\" (UniqueName: \"kubernetes.io/projected/ae3ff382-2124-41ae-a329-b88f9d915a3a-kube-api-access-sl8mt\") pod \"collect-profiles-29525550-7jkrr\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:00 crc kubenswrapper[4787]: I0219 20:30:00.831634 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:01 crc kubenswrapper[4787]: I0219 20:30:01.340446 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr"] Feb 19 20:30:01 crc kubenswrapper[4787]: I0219 20:30:01.849955 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" event={"ID":"ae3ff382-2124-41ae-a329-b88f9d915a3a","Type":"ContainerStarted","Data":"c15238680dd41de96e2d0a8b7efa5ce6c26866f9d9260c0a46e0d19dc2655ed1"} Feb 19 20:30:01 crc kubenswrapper[4787]: I0219 20:30:01.850002 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" event={"ID":"ae3ff382-2124-41ae-a329-b88f9d915a3a","Type":"ContainerStarted","Data":"676f7f57265402a39607d6a1c9da45f90b96873a2e817d37263605b87f683048"} Feb 19 20:30:01 crc kubenswrapper[4787]: I0219 20:30:01.884056 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" 
podStartSLOduration=1.884035025 podStartE2EDuration="1.884035025s" podCreationTimestamp="2026-02-19 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 20:30:01.876382808 +0000 UTC m=+4269.667048760" watchObservedRunningTime="2026-02-19 20:30:01.884035025 +0000 UTC m=+4269.674700967" Feb 19 20:30:02 crc kubenswrapper[4787]: I0219 20:30:02.873858 4787 generic.go:334] "Generic (PLEG): container finished" podID="ae3ff382-2124-41ae-a329-b88f9d915a3a" containerID="c15238680dd41de96e2d0a8b7efa5ce6c26866f9d9260c0a46e0d19dc2655ed1" exitCode=0 Feb 19 20:30:02 crc kubenswrapper[4787]: I0219 20:30:02.874133 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" event={"ID":"ae3ff382-2124-41ae-a329-b88f9d915a3a","Type":"ContainerDied","Data":"c15238680dd41de96e2d0a8b7efa5ce6c26866f9d9260c0a46e0d19dc2655ed1"} Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.350829 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.407375 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3ff382-2124-41ae-a329-b88f9d915a3a-secret-volume\") pod \"ae3ff382-2124-41ae-a329-b88f9d915a3a\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.407477 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl8mt\" (UniqueName: \"kubernetes.io/projected/ae3ff382-2124-41ae-a329-b88f9d915a3a-kube-api-access-sl8mt\") pod \"ae3ff382-2124-41ae-a329-b88f9d915a3a\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.407587 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3ff382-2124-41ae-a329-b88f9d915a3a-config-volume\") pod \"ae3ff382-2124-41ae-a329-b88f9d915a3a\" (UID: \"ae3ff382-2124-41ae-a329-b88f9d915a3a\") " Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.409055 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae3ff382-2124-41ae-a329-b88f9d915a3a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae3ff382-2124-41ae-a329-b88f9d915a3a" (UID: "ae3ff382-2124-41ae-a329-b88f9d915a3a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.414578 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3ff382-2124-41ae-a329-b88f9d915a3a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae3ff382-2124-41ae-a329-b88f9d915a3a" (UID: "ae3ff382-2124-41ae-a329-b88f9d915a3a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.416408 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3ff382-2124-41ae-a329-b88f9d915a3a-kube-api-access-sl8mt" (OuterVolumeSpecName: "kube-api-access-sl8mt") pod "ae3ff382-2124-41ae-a329-b88f9d915a3a" (UID: "ae3ff382-2124-41ae-a329-b88f9d915a3a"). InnerVolumeSpecName "kube-api-access-sl8mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.510927 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae3ff382-2124-41ae-a329-b88f9d915a3a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.511003 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl8mt\" (UniqueName: \"kubernetes.io/projected/ae3ff382-2124-41ae-a329-b88f9d915a3a-kube-api-access-sl8mt\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.511016 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae3ff382-2124-41ae-a329-b88f9d915a3a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.896863 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.909402 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525550-7jkrr" event={"ID":"ae3ff382-2124-41ae-a329-b88f9d915a3a","Type":"ContainerDied","Data":"676f7f57265402a39607d6a1c9da45f90b96873a2e817d37263605b87f683048"} Feb 19 20:30:04 crc kubenswrapper[4787]: I0219 20:30:04.909455 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676f7f57265402a39607d6a1c9da45f90b96873a2e817d37263605b87f683048" Feb 19 20:30:05 crc kubenswrapper[4787]: I0219 20:30:05.436228 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw"] Feb 19 20:30:05 crc kubenswrapper[4787]: I0219 20:30:05.447185 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525505-t5vdw"] Feb 19 20:30:06 crc kubenswrapper[4787]: I0219 20:30:06.910543 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79bad24-7b6a-46d9-8ee4-c710aba23e86" path="/var/lib/kubelet/pods/e79bad24-7b6a-46d9-8ee4-c710aba23e86/volumes" Feb 19 20:30:09 crc kubenswrapper[4787]: I0219 20:30:09.262955 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:30:09 crc kubenswrapper[4787]: I0219 20:30:09.263491 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 20:30:09 crc kubenswrapper[4787]: I0219 20:30:09.799543 4787 scope.go:117] "RemoveContainer" containerID="3a84e047ab67813124193ff3588582449e9c8a05bfbf412bd82fc6bcb0a09317" Feb 19 20:30:39 crc kubenswrapper[4787]: I0219 20:30:39.263295 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:30:39 crc kubenswrapper[4787]: I0219 20:30:39.263890 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:30:39 crc kubenswrapper[4787]: I0219 20:30:39.263945 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:30:39 crc kubenswrapper[4787]: I0219 20:30:39.264877 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:30:39 crc kubenswrapper[4787]: I0219 20:30:39.264936 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" 
gracePeriod=600 Feb 19 20:30:39 crc kubenswrapper[4787]: E0219 20:30:39.386279 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:30:40 crc kubenswrapper[4787]: I0219 20:30:40.297690 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" exitCode=0 Feb 19 20:30:40 crc kubenswrapper[4787]: I0219 20:30:40.298104 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7"} Feb 19 20:30:40 crc kubenswrapper[4787]: I0219 20:30:40.298170 4787 scope.go:117] "RemoveContainer" containerID="e9cc9fae3f266cfac46b6d5cd47683932a3b9623ac8d694ecefb16f76cacd602" Feb 19 20:30:40 crc kubenswrapper[4787]: I0219 20:30:40.299191 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:30:40 crc kubenswrapper[4787]: E0219 20:30:40.299792 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:30:52 crc kubenswrapper[4787]: I0219 
20:30:52.892069 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:30:52 crc kubenswrapper[4787]: E0219 20:30:52.892962 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:31:06 crc kubenswrapper[4787]: I0219 20:31:06.892768 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:31:06 crc kubenswrapper[4787]: E0219 20:31:06.893559 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:31:18 crc kubenswrapper[4787]: I0219 20:31:18.891985 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:31:18 crc kubenswrapper[4787]: E0219 20:31:18.892958 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:31:29 crc 
kubenswrapper[4787]: I0219 20:31:29.892523 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:31:29 crc kubenswrapper[4787]: E0219 20:31:29.893356 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:31:43 crc kubenswrapper[4787]: I0219 20:31:43.891692 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:31:43 crc kubenswrapper[4787]: E0219 20:31:43.892401 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:31:56 crc kubenswrapper[4787]: I0219 20:31:56.892502 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:31:56 crc kubenswrapper[4787]: E0219 20:31:56.893566 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 
19 20:32:11 crc kubenswrapper[4787]: I0219 20:32:11.892602 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:32:11 crc kubenswrapper[4787]: E0219 20:32:11.893482 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:32:22 crc kubenswrapper[4787]: I0219 20:32:22.902769 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:32:22 crc kubenswrapper[4787]: E0219 20:32:22.904663 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:32:34 crc kubenswrapper[4787]: I0219 20:32:34.894076 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:32:34 crc kubenswrapper[4787]: E0219 20:32:34.895043 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" 
podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:32:49 crc kubenswrapper[4787]: I0219 20:32:49.892412 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:32:49 crc kubenswrapper[4787]: E0219 20:32:49.893499 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:33:03 crc kubenswrapper[4787]: I0219 20:33:03.892420 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:33:03 crc kubenswrapper[4787]: E0219 20:33:03.893244 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:33:15 crc kubenswrapper[4787]: I0219 20:33:15.892378 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:33:15 crc kubenswrapper[4787]: E0219 20:33:15.893233 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.663184 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 20:33:17 crc kubenswrapper[4787]: E0219 20:33:17.664553 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3ff382-2124-41ae-a329-b88f9d915a3a" containerName="collect-profiles" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.664571 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3ff382-2124-41ae-a329-b88f9d915a3a" containerName="collect-profiles" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.664882 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3ff382-2124-41ae-a329-b88f9d915a3a" containerName="collect-profiles" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.665854 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.668577 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.669451 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4xfxp" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.672556 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.681108 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.689785 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.830632 4787 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.830934 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-config-data\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831054 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831175 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831302 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831486 4787 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8kgj\" (UniqueName: \"kubernetes.io/projected/3cd05e88-76fc-4a10-bc71-426177032c9f-kube-api-access-h8kgj\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831712 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831845 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.831991 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.933732 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.933838 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h8kgj\" (UniqueName: \"kubernetes.io/projected/3cd05e88-76fc-4a10-bc71-426177032c9f-kube-api-access-h8kgj\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.933901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.933928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.933973 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.934049 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.934077 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-config-data\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.934094 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.934216 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.934674 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.934901 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.935229 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.935436 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-config-data\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.939369 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.942631 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.943092 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.946634 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " 
pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.955348 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8kgj\" (UniqueName: \"kubernetes.io/projected/3cd05e88-76fc-4a10-bc71-426177032c9f-kube-api-access-h8kgj\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:17 crc kubenswrapper[4787]: I0219 20:33:17.973551 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " pod="openstack/tempest-tests-tempest" Feb 19 20:33:18 crc kubenswrapper[4787]: I0219 20:33:18.002424 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 20:33:18 crc kubenswrapper[4787]: I0219 20:33:18.591804 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 20:33:18 crc kubenswrapper[4787]: I0219 20:33:18.989482 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3cd05e88-76fc-4a10-bc71-426177032c9f","Type":"ContainerStarted","Data":"0a39cb75a1a02358210fecfe801a1581acfe973902bda63852baf376d39bddff"} Feb 19 20:33:30 crc kubenswrapper[4787]: I0219 20:33:30.894601 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:33:30 crc kubenswrapper[4787]: E0219 20:33:30.896413 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:33:43 crc kubenswrapper[4787]: I0219 20:33:43.892700 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:33:43 crc kubenswrapper[4787]: E0219 20:33:43.893648 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.061497 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-trscg"] Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.065141 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.100156 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trscg"] Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.266998 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-catalog-content\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.270514 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-utilities\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.270714 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czj8\" (UniqueName: \"kubernetes.io/projected/92dfbd60-ea6f-49ac-8eae-168398489701-kube-api-access-4czj8\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.375097 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-catalog-content\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.375591 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-catalog-content\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.376082 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-utilities\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.376287 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4czj8\" (UniqueName: \"kubernetes.io/projected/92dfbd60-ea6f-49ac-8eae-168398489701-kube-api-access-4czj8\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.376564 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-utilities\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.402312 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4czj8\" (UniqueName: \"kubernetes.io/projected/92dfbd60-ea6f-49ac-8eae-168398489701-kube-api-access-4czj8\") pod \"redhat-marketplace-trscg\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:44 crc kubenswrapper[4787]: I0219 20:33:44.424422 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.259821 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hgzbh"] Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.263902 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.270701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgzbh"] Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.331844 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-utilities\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.332025 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-catalog-content\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.332302 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmxq\" (UniqueName: \"kubernetes.io/projected/96f9ff40-f037-4683-9add-75ec13cb9155-kube-api-access-rjmxq\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.434439 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-utilities\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.434543 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-catalog-content\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.434714 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjmxq\" (UniqueName: \"kubernetes.io/projected/96f9ff40-f037-4683-9add-75ec13cb9155-kube-api-access-rjmxq\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.437145 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-catalog-content\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.437487 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-utilities\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.841465 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjmxq\" (UniqueName: 
\"kubernetes.io/projected/96f9ff40-f037-4683-9add-75ec13cb9155-kube-api-access-rjmxq\") pod \"community-operators-hgzbh\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:46 crc kubenswrapper[4787]: I0219 20:33:46.895469 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:33:55 crc kubenswrapper[4787]: E0219 20:33:55.675245 4787 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 19 20:33:55 crc kubenswrapper[4787]: E0219 20:33:55.680622 4787 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.conf
ig/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8kgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(3cd05e88-76fc-4a10-bc71-426177032c9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Feb 19 20:33:55 crc kubenswrapper[4787]: E0219 20:33:55.682456 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="3cd05e88-76fc-4a10-bc71-426177032c9f" Feb 19 20:33:55 crc kubenswrapper[4787]: I0219 20:33:55.892236 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:33:55 crc kubenswrapper[4787]: E0219 20:33:55.892926 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.214019 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trscg"] Feb 19 20:33:56 crc kubenswrapper[4787]: W0219 20:33:56.311847 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f9ff40_f037_4683_9add_75ec13cb9155.slice/crio-7af47f9435aa0b978d89ee448ff8a8776f3e669506e29f1a0d527600ba3316a9 WatchSource:0}: Error finding container 7af47f9435aa0b978d89ee448ff8a8776f3e669506e29f1a0d527600ba3316a9: Status 404 returned error can't find the container with id 7af47f9435aa0b978d89ee448ff8a8776f3e669506e29f1a0d527600ba3316a9 Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.320371 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgzbh"] Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 
20:33:56.667031 4787 generic.go:334] "Generic (PLEG): container finished" podID="92dfbd60-ea6f-49ac-8eae-168398489701" containerID="79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40" exitCode=0 Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.667162 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerDied","Data":"79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40"} Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.667423 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerStarted","Data":"e0c5617d7b01f8767c6a5b0d1f0d01b68c0e58a5176e95824011d090c8e9911a"} Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.669125 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.669446 4787 generic.go:334] "Generic (PLEG): container finished" podID="96f9ff40-f037-4683-9add-75ec13cb9155" containerID="e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3" exitCode=0 Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.669485 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerDied","Data":"e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3"} Feb 19 20:33:56 crc kubenswrapper[4787]: I0219 20:33:56.669517 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerStarted","Data":"7af47f9435aa0b978d89ee448ff8a8776f3e669506e29f1a0d527600ba3316a9"} Feb 19 20:33:56 crc kubenswrapper[4787]: E0219 20:33:56.670748 4787 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="3cd05e88-76fc-4a10-bc71-426177032c9f" Feb 19 20:33:58 crc kubenswrapper[4787]: I0219 20:33:58.693198 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerStarted","Data":"92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda"} Feb 19 20:33:58 crc kubenswrapper[4787]: I0219 20:33:58.695545 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerStarted","Data":"c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16"} Feb 19 20:33:59 crc kubenswrapper[4787]: I0219 20:33:59.706424 4787 generic.go:334] "Generic (PLEG): container finished" podID="92dfbd60-ea6f-49ac-8eae-168398489701" containerID="92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda" exitCode=0 Feb 19 20:33:59 crc kubenswrapper[4787]: I0219 20:33:59.706477 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerDied","Data":"92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda"} Feb 19 20:34:00 crc kubenswrapper[4787]: I0219 20:34:00.728680 4787 generic.go:334] "Generic (PLEG): container finished" podID="96f9ff40-f037-4683-9add-75ec13cb9155" containerID="c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16" exitCode=0 Feb 19 20:34:00 crc kubenswrapper[4787]: I0219 20:34:00.728734 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" 
event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerDied","Data":"c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16"} Feb 19 20:34:01 crc kubenswrapper[4787]: I0219 20:34:01.743051 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerStarted","Data":"cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e"} Feb 19 20:34:01 crc kubenswrapper[4787]: I0219 20:34:01.745718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerStarted","Data":"8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c"} Feb 19 20:34:01 crc kubenswrapper[4787]: I0219 20:34:01.766257 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-trscg" podStartSLOduration=14.026655493 podStartE2EDuration="17.766235835s" podCreationTimestamp="2026-02-19 20:33:44 +0000 UTC" firstStartedPulling="2026-02-19 20:33:56.668768751 +0000 UTC m=+4504.459434733" lastFinishedPulling="2026-02-19 20:34:00.408349133 +0000 UTC m=+4508.199015075" observedRunningTime="2026-02-19 20:34:01.762666094 +0000 UTC m=+4509.553332036" watchObservedRunningTime="2026-02-19 20:34:01.766235835 +0000 UTC m=+4509.556901767" Feb 19 20:34:01 crc kubenswrapper[4787]: I0219 20:34:01.792683 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hgzbh" podStartSLOduration=11.316373188 podStartE2EDuration="15.792663866s" podCreationTimestamp="2026-02-19 20:33:46 +0000 UTC" firstStartedPulling="2026-02-19 20:33:56.671441587 +0000 UTC m=+4504.462107529" lastFinishedPulling="2026-02-19 20:34:01.147732265 +0000 UTC m=+4508.938398207" observedRunningTime="2026-02-19 20:34:01.78050394 +0000 UTC m=+4509.571169892" 
watchObservedRunningTime="2026-02-19 20:34:01.792663866 +0000 UTC m=+4509.583329808" Feb 19 20:34:04 crc kubenswrapper[4787]: I0219 20:34:04.425777 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:34:04 crc kubenswrapper[4787]: I0219 20:34:04.426393 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:34:04 crc kubenswrapper[4787]: I0219 20:34:04.484739 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:34:06 crc kubenswrapper[4787]: I0219 20:34:06.908112 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:34:06 crc kubenswrapper[4787]: I0219 20:34:06.908713 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:34:06 crc kubenswrapper[4787]: I0219 20:34:06.951957 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:34:07 crc kubenswrapper[4787]: I0219 20:34:07.873130 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:34:07 crc kubenswrapper[4787]: I0219 20:34:07.892465 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:34:07 crc kubenswrapper[4787]: E0219 20:34:07.892977 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:34:07 crc kubenswrapper[4787]: I0219 20:34:07.932565 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgzbh"] Feb 19 20:34:09 crc kubenswrapper[4787]: I0219 20:34:09.835690 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hgzbh" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="registry-server" containerID="cri-o://8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c" gracePeriod=2 Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.785489 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.854619 4787 generic.go:334] "Generic (PLEG): container finished" podID="96f9ff40-f037-4683-9add-75ec13cb9155" containerID="8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c" exitCode=0 Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.854688 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerDied","Data":"8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c"} Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.854717 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgzbh" event={"ID":"96f9ff40-f037-4683-9add-75ec13cb9155","Type":"ContainerDied","Data":"7af47f9435aa0b978d89ee448ff8a8776f3e669506e29f1a0d527600ba3316a9"} Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.854753 4787 scope.go:117] "RemoveContainer" containerID="8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.855001 
4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgzbh" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.911834 4787 scope.go:117] "RemoveContainer" containerID="c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.933178 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-utilities\") pod \"96f9ff40-f037-4683-9add-75ec13cb9155\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.933434 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjmxq\" (UniqueName: \"kubernetes.io/projected/96f9ff40-f037-4683-9add-75ec13cb9155-kube-api-access-rjmxq\") pod \"96f9ff40-f037-4683-9add-75ec13cb9155\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.933502 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-catalog-content\") pod \"96f9ff40-f037-4683-9add-75ec13cb9155\" (UID: \"96f9ff40-f037-4683-9add-75ec13cb9155\") " Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.934516 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-utilities" (OuterVolumeSpecName: "utilities") pod "96f9ff40-f037-4683-9add-75ec13cb9155" (UID: "96f9ff40-f037-4683-9add-75ec13cb9155"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.940401 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f9ff40-f037-4683-9add-75ec13cb9155-kube-api-access-rjmxq" (OuterVolumeSpecName: "kube-api-access-rjmxq") pod "96f9ff40-f037-4683-9add-75ec13cb9155" (UID: "96f9ff40-f037-4683-9add-75ec13cb9155"). InnerVolumeSpecName "kube-api-access-rjmxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.943525 4787 scope.go:117] "RemoveContainer" containerID="e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3" Feb 19 20:34:10 crc kubenswrapper[4787]: I0219 20:34:10.993829 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96f9ff40-f037-4683-9add-75ec13cb9155" (UID: "96f9ff40-f037-4683-9add-75ec13cb9155"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.036460 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjmxq\" (UniqueName: \"kubernetes.io/projected/96f9ff40-f037-4683-9add-75ec13cb9155-kube-api-access-rjmxq\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.036500 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.036511 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f9ff40-f037-4683-9add-75ec13cb9155-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.046794 4787 scope.go:117] "RemoveContainer" containerID="8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c" Feb 19 20:34:11 crc kubenswrapper[4787]: E0219 20:34:11.047139 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c\": container with ID starting with 8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c not found: ID does not exist" containerID="8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.047176 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c"} err="failed to get container status \"8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c\": rpc error: code = NotFound desc = could not find container \"8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c\": container with ID 
starting with 8d23b42609aa3ce5474aef669ec8544fc0f604a4c2d281184dfe3777586edf1c not found: ID does not exist" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.047196 4787 scope.go:117] "RemoveContainer" containerID="c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16" Feb 19 20:34:11 crc kubenswrapper[4787]: E0219 20:34:11.047518 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16\": container with ID starting with c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16 not found: ID does not exist" containerID="c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.047572 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16"} err="failed to get container status \"c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16\": rpc error: code = NotFound desc = could not find container \"c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16\": container with ID starting with c284cd73576fb452cd0ebd904e30f32a1a6c1a4825feb886015e0b76071b3c16 not found: ID does not exist" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.047600 4787 scope.go:117] "RemoveContainer" containerID="e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3" Feb 19 20:34:11 crc kubenswrapper[4787]: E0219 20:34:11.047916 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3\": container with ID starting with e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3 not found: ID does not exist" containerID="e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3" Feb 19 
20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.047942 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3"} err="failed to get container status \"e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3\": rpc error: code = NotFound desc = could not find container \"e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3\": container with ID starting with e7c6e261e061594a637b830f007a9a2cf632860f44695ea95c61466d2b7991f3 not found: ID does not exist" Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.193594 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgzbh"] Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.205589 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hgzbh"] Feb 19 20:34:11 crc kubenswrapper[4787]: I0219 20:34:11.346804 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 20:34:12 crc kubenswrapper[4787]: I0219 20:34:12.904116 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" path="/var/lib/kubelet/pods/96f9ff40-f037-4683-9add-75ec13cb9155/volumes" Feb 19 20:34:13 crc kubenswrapper[4787]: I0219 20:34:13.892708 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3cd05e88-76fc-4a10-bc71-426177032c9f","Type":"ContainerStarted","Data":"d3c26c2fa6f8727c26bf0d43059c68c7113b29d82048b6f6ad2828bbf7aece9e"} Feb 19 20:34:13 crc kubenswrapper[4787]: I0219 20:34:13.920515 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.173044389 podStartE2EDuration="57.920498875s" podCreationTimestamp="2026-02-19 20:33:16 +0000 UTC" firstStartedPulling="2026-02-19 
20:33:18.597015382 +0000 UTC m=+4466.387681324" lastFinishedPulling="2026-02-19 20:34:11.344469878 +0000 UTC m=+4519.135135810" observedRunningTime="2026-02-19 20:34:13.918142588 +0000 UTC m=+4521.708808530" watchObservedRunningTime="2026-02-19 20:34:13.920498875 +0000 UTC m=+4521.711164817" Feb 19 20:34:14 crc kubenswrapper[4787]: I0219 20:34:14.472523 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:34:14 crc kubenswrapper[4787]: I0219 20:34:14.541473 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trscg"] Feb 19 20:34:14 crc kubenswrapper[4787]: I0219 20:34:14.901836 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-trscg" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="registry-server" containerID="cri-o://cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e" gracePeriod=2 Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.509179 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.653937 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-catalog-content\") pod \"92dfbd60-ea6f-49ac-8eae-168398489701\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.654088 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4czj8\" (UniqueName: \"kubernetes.io/projected/92dfbd60-ea6f-49ac-8eae-168398489701-kube-api-access-4czj8\") pod \"92dfbd60-ea6f-49ac-8eae-168398489701\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.654144 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-utilities\") pod \"92dfbd60-ea6f-49ac-8eae-168398489701\" (UID: \"92dfbd60-ea6f-49ac-8eae-168398489701\") " Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.655467 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-utilities" (OuterVolumeSpecName: "utilities") pod "92dfbd60-ea6f-49ac-8eae-168398489701" (UID: "92dfbd60-ea6f-49ac-8eae-168398489701"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.656690 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.660621 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbd60-ea6f-49ac-8eae-168398489701-kube-api-access-4czj8" (OuterVolumeSpecName: "kube-api-access-4czj8") pod "92dfbd60-ea6f-49ac-8eae-168398489701" (UID: "92dfbd60-ea6f-49ac-8eae-168398489701"). InnerVolumeSpecName "kube-api-access-4czj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.691386 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92dfbd60-ea6f-49ac-8eae-168398489701" (UID: "92dfbd60-ea6f-49ac-8eae-168398489701"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.758695 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4czj8\" (UniqueName: \"kubernetes.io/projected/92dfbd60-ea6f-49ac-8eae-168398489701-kube-api-access-4czj8\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.759043 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfbd60-ea6f-49ac-8eae-168398489701-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.914340 4787 generic.go:334] "Generic (PLEG): container finished" podID="92dfbd60-ea6f-49ac-8eae-168398489701" containerID="cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e" exitCode=0 Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.914383 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerDied","Data":"cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e"} Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.914410 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trscg" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.914432 4787 scope.go:117] "RemoveContainer" containerID="cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.914420 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trscg" event={"ID":"92dfbd60-ea6f-49ac-8eae-168398489701","Type":"ContainerDied","Data":"e0c5617d7b01f8767c6a5b0d1f0d01b68c0e58a5176e95824011d090c8e9911a"} Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.955560 4787 scope.go:117] "RemoveContainer" containerID="92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda" Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.955894 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trscg"] Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.967658 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-trscg"] Feb 19 20:34:15 crc kubenswrapper[4787]: I0219 20:34:15.992963 4787 scope.go:117] "RemoveContainer" containerID="79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.040753 4787 scope.go:117] "RemoveContainer" containerID="cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e" Feb 19 20:34:16 crc kubenswrapper[4787]: E0219 20:34:16.041107 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e\": container with ID starting with cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e not found: ID does not exist" containerID="cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.041145 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e"} err="failed to get container status \"cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e\": rpc error: code = NotFound desc = could not find container \"cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e\": container with ID starting with cc1ffc00ccc6ca014fc5781d290b9d0e2787b7fdbdda7aecb42f06572dfa6a6e not found: ID does not exist" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.041173 4787 scope.go:117] "RemoveContainer" containerID="92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda" Feb 19 20:34:16 crc kubenswrapper[4787]: E0219 20:34:16.041835 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda\": container with ID starting with 92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda not found: ID does not exist" containerID="92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.041867 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda"} err="failed to get container status \"92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda\": rpc error: code = NotFound desc = could not find container \"92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda\": container with ID starting with 92e2dab862f6d5d27be2b454b52f3c0ba7864e34962af9a0437de99d47a90cda not found: ID does not exist" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.041884 4787 scope.go:117] "RemoveContainer" containerID="79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40" Feb 19 20:34:16 crc kubenswrapper[4787]: E0219 
20:34:16.042181 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40\": container with ID starting with 79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40 not found: ID does not exist" containerID="79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.042204 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40"} err="failed to get container status \"79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40\": rpc error: code = NotFound desc = could not find container \"79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40\": container with ID starting with 79ea349d46b1ab7564046b449e5c55611eea8aaa68d2fbe6f9aa9cd87a33fa40 not found: ID does not exist" Feb 19 20:34:16 crc kubenswrapper[4787]: I0219 20:34:16.903999 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" path="/var/lib/kubelet/pods/92dfbd60-ea6f-49ac-8eae-168398489701/volumes" Feb 19 20:34:20 crc kubenswrapper[4787]: I0219 20:34:20.892259 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:34:20 crc kubenswrapper[4787]: E0219 20:34:20.893103 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.219396 
4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bh7nk"] Feb 19 20:34:30 crc kubenswrapper[4787]: E0219 20:34:30.220409 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="extract-utilities" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220425 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="extract-utilities" Feb 19 20:34:30 crc kubenswrapper[4787]: E0219 20:34:30.220443 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="registry-server" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220451 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="registry-server" Feb 19 20:34:30 crc kubenswrapper[4787]: E0219 20:34:30.220488 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="extract-content" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220498 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="extract-content" Feb 19 20:34:30 crc kubenswrapper[4787]: E0219 20:34:30.220520 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="extract-utilities" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220529 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="extract-utilities" Feb 19 20:34:30 crc kubenswrapper[4787]: E0219 20:34:30.220547 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="extract-content" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220555 4787 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="extract-content" Feb 19 20:34:30 crc kubenswrapper[4787]: E0219 20:34:30.220571 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="registry-server" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220579 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="registry-server" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220873 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dfbd60-ea6f-49ac-8eae-168398489701" containerName="registry-server" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.220895 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f9ff40-f037-4683-9add-75ec13cb9155" containerName="registry-server" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.265592 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh7nk"] Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.265739 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.321182 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-utilities\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.321371 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-catalog-content\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.321465 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7znn\" (UniqueName: \"kubernetes.io/projected/3aea2922-b956-4ae3-a1ce-8e5378029fe5-kube-api-access-t7znn\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.423508 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-utilities\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.423737 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-catalog-content\") pod 
\"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.423856 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7znn\" (UniqueName: \"kubernetes.io/projected/3aea2922-b956-4ae3-a1ce-8e5378029fe5-kube-api-access-t7znn\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.424070 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-utilities\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.424385 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-catalog-content\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.501535 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7znn\" (UniqueName: \"kubernetes.io/projected/3aea2922-b956-4ae3-a1ce-8e5378029fe5-kube-api-access-t7znn\") pod \"certified-operators-bh7nk\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:30 crc kubenswrapper[4787]: I0219 20:34:30.592930 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:31 crc kubenswrapper[4787]: I0219 20:34:31.130387 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bh7nk"] Feb 19 20:34:32 crc kubenswrapper[4787]: I0219 20:34:32.073548 4787 generic.go:334] "Generic (PLEG): container finished" podID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerID="b0d41bea4c12cc3f2af98cb55c678ed789864a05fd9f8c8e724ab1d5a5ae32c4" exitCode=0 Feb 19 20:34:32 crc kubenswrapper[4787]: I0219 20:34:32.073589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerDied","Data":"b0d41bea4c12cc3f2af98cb55c678ed789864a05fd9f8c8e724ab1d5a5ae32c4"} Feb 19 20:34:32 crc kubenswrapper[4787]: I0219 20:34:32.073632 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerStarted","Data":"7bc60dbd43fe6c03ff397c471bcc2aad451d186f6e0cac2ecbf4ff79d9e99202"} Feb 19 20:34:33 crc kubenswrapper[4787]: I0219 20:34:33.087415 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerStarted","Data":"937721911f1bff04aa4d8a7a46d7e0ea10d3090f5b5f2c7481dc562535162939"} Feb 19 20:34:35 crc kubenswrapper[4787]: I0219 20:34:35.107194 4787 generic.go:334] "Generic (PLEG): container finished" podID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerID="937721911f1bff04aa4d8a7a46d7e0ea10d3090f5b5f2c7481dc562535162939" exitCode=0 Feb 19 20:34:35 crc kubenswrapper[4787]: I0219 20:34:35.107298 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" 
event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerDied","Data":"937721911f1bff04aa4d8a7a46d7e0ea10d3090f5b5f2c7481dc562535162939"} Feb 19 20:34:35 crc kubenswrapper[4787]: I0219 20:34:35.892525 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:34:35 crc kubenswrapper[4787]: E0219 20:34:35.893233 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:34:36 crc kubenswrapper[4787]: I0219 20:34:36.118795 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerStarted","Data":"b7abf6fbf22e42959a831388b4319b8fa85f432efa38722bdf49246e25928381"} Feb 19 20:34:40 crc kubenswrapper[4787]: I0219 20:34:40.593451 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:40 crc kubenswrapper[4787]: I0219 20:34:40.593898 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:41 crc kubenswrapper[4787]: I0219 20:34:41.652094 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bh7nk" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="registry-server" probeResult="failure" output=< Feb 19 20:34:41 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:34:41 crc kubenswrapper[4787]: > Feb 19 20:34:49 crc kubenswrapper[4787]: I0219 
20:34:49.893213 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:34:49 crc kubenswrapper[4787]: E0219 20:34:49.894165 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:34:50 crc kubenswrapper[4787]: I0219 20:34:50.652734 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:50 crc kubenswrapper[4787]: I0219 20:34:50.692677 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bh7nk" podStartSLOduration=17.267545141 podStartE2EDuration="20.692657936s" podCreationTimestamp="2026-02-19 20:34:30 +0000 UTC" firstStartedPulling="2026-02-19 20:34:32.075049072 +0000 UTC m=+4539.865715014" lastFinishedPulling="2026-02-19 20:34:35.500161857 +0000 UTC m=+4543.290827809" observedRunningTime="2026-02-19 20:34:36.151407347 +0000 UTC m=+4543.942073309" watchObservedRunningTime="2026-02-19 20:34:50.692657936 +0000 UTC m=+4558.483323878" Feb 19 20:34:50 crc kubenswrapper[4787]: I0219 20:34:50.705180 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:52 crc kubenswrapper[4787]: I0219 20:34:52.263407 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh7nk"] Feb 19 20:34:52 crc kubenswrapper[4787]: I0219 20:34:52.691315 4787 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-bh7nk" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="registry-server" containerID="cri-o://b7abf6fbf22e42959a831388b4319b8fa85f432efa38722bdf49246e25928381" gracePeriod=2 Feb 19 20:34:52 crc kubenswrapper[4787]: E0219 20:34:52.986435 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aea2922_b956_4ae3_a1ce_8e5378029fe5.slice/crio-conmon-b7abf6fbf22e42959a831388b4319b8fa85f432efa38722bdf49246e25928381.scope\": RecentStats: unable to find data in memory cache]" Feb 19 20:34:53 crc kubenswrapper[4787]: I0219 20:34:53.702398 4787 generic.go:334] "Generic (PLEG): container finished" podID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerID="b7abf6fbf22e42959a831388b4319b8fa85f432efa38722bdf49246e25928381" exitCode=0 Feb 19 20:34:53 crc kubenswrapper[4787]: I0219 20:34:53.702464 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerDied","Data":"b7abf6fbf22e42959a831388b4319b8fa85f432efa38722bdf49246e25928381"} Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.408535 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.466806 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-catalog-content\") pod \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.467052 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7znn\" (UniqueName: \"kubernetes.io/projected/3aea2922-b956-4ae3-a1ce-8e5378029fe5-kube-api-access-t7znn\") pod \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.467241 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-utilities\") pod \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\" (UID: \"3aea2922-b956-4ae3-a1ce-8e5378029fe5\") " Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.468835 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-utilities" (OuterVolumeSpecName: "utilities") pod "3aea2922-b956-4ae3-a1ce-8e5378029fe5" (UID: "3aea2922-b956-4ae3-a1ce-8e5378029fe5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.492951 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aea2922-b956-4ae3-a1ce-8e5378029fe5-kube-api-access-t7znn" (OuterVolumeSpecName: "kube-api-access-t7znn") pod "3aea2922-b956-4ae3-a1ce-8e5378029fe5" (UID: "3aea2922-b956-4ae3-a1ce-8e5378029fe5"). InnerVolumeSpecName "kube-api-access-t7znn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.570698 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7znn\" (UniqueName: \"kubernetes.io/projected/3aea2922-b956-4ae3-a1ce-8e5378029fe5-kube-api-access-t7znn\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.570732 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.632973 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aea2922-b956-4ae3-a1ce-8e5378029fe5" (UID: "3aea2922-b956-4ae3-a1ce-8e5378029fe5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.672711 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea2922-b956-4ae3-a1ce-8e5378029fe5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.721002 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bh7nk" event={"ID":"3aea2922-b956-4ae3-a1ce-8e5378029fe5","Type":"ContainerDied","Data":"7bc60dbd43fe6c03ff397c471bcc2aad451d186f6e0cac2ecbf4ff79d9e99202"} Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.721069 4787 scope.go:117] "RemoveContainer" containerID="b7abf6fbf22e42959a831388b4319b8fa85f432efa38722bdf49246e25928381" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.721083 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bh7nk" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.774003 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bh7nk"] Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.808909 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bh7nk"] Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.813097 4787 scope.go:117] "RemoveContainer" containerID="937721911f1bff04aa4d8a7a46d7e0ea10d3090f5b5f2c7481dc562535162939" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.900317 4787 scope.go:117] "RemoveContainer" containerID="b0d41bea4c12cc3f2af98cb55c678ed789864a05fd9f8c8e724ab1d5a5ae32c4" Feb 19 20:34:54 crc kubenswrapper[4787]: I0219 20:34:54.907961 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" path="/var/lib/kubelet/pods/3aea2922-b956-4ae3-a1ce-8e5378029fe5/volumes" Feb 19 20:35:02 crc kubenswrapper[4787]: I0219 20:35:02.895399 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:35:02 crc kubenswrapper[4787]: E0219 20:35:02.897116 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:35:16 crc kubenswrapper[4787]: I0219 20:35:16.893378 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:35:16 crc kubenswrapper[4787]: E0219 20:35:16.894291 4787 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:35:28 crc kubenswrapper[4787]: I0219 20:35:28.904746 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:35:28 crc kubenswrapper[4787]: E0219 20:35:28.910949 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:35:40 crc kubenswrapper[4787]: I0219 20:35:40.894901 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7" Feb 19 20:35:42 crc kubenswrapper[4787]: I0219 20:35:42.335624 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"989356fdb3d601c723661a759f744160fea15ac5712a087d10ed1720f45de4af"} Feb 19 20:35:45 crc kubenswrapper[4787]: I0219 20:35:45.599179 4787 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-6sjw5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:35:45 crc kubenswrapper[4787]: I0219 20:35:45.611449 
4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" podUID="ace5ef3f-b2ed-4d41-a085-4c662e70061b" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:03 crc kubenswrapper[4787]: I0219 20:37:03.955617 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:04 crc kubenswrapper[4787]: I0219 20:37:03.955619 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:04 crc kubenswrapper[4787]: I0219 20:37:03.959473 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:04 crc kubenswrapper[4787]: I0219 20:37:03.959473 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.027882 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.027882 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.031286 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.031266 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.360066 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.360139 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.360446 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.360483 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.393033 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.393111 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp 
container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.393160 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:06 crc kubenswrapper[4787]: I0219 20:37:06.393104 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.011738 4787 patch_prober.go:28] interesting pod/monitoring-plugin-ffdd67d56-c5b8z container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.020828 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" podUID="9cc2e2fc-ef4a-429e-a313-00db077b7feb" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.609955 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-mvnk9" podUID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.934344 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.934420 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.934350 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:10 crc kubenswrapper[4787]: I0219 20:37:10.934483 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:11 crc kubenswrapper[4787]: I0219 20:37:11.149907 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" podUID="f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:11 crc kubenswrapper[4787]: I0219 20:37:11.149920 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-68jqv" podUID="f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:12 crc kubenswrapper[4787]: I0219 20:37:12.141622 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:12 crc kubenswrapper[4787]: I0219 20:37:12.141630 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:12 crc kubenswrapper[4787]: I0219 20:37:12.292396 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-pn7bv" podUID="af7a16ca-ed17-45d6-aa9e-f2552dc92af7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:12 crc kubenswrapper[4787]: I0219 20:37:12.292397 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/speaker-pn7bv" podUID="af7a16ca-ed17-45d6-aa9e-f2552dc92af7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.138492 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.138624 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.138661 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.139011 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.327406 4787 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-dkzcx container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.332671 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" 
podUID="2c6f8721-8336-47fa-b27a-6c897006b94e" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.546264 4787 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-htw48 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.546630 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" podUID="9d8ca7ab-f667-423c-926e-a9e2cfc10c1b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.787051 4787 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nj9wt container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded" start-of-body= Feb 19 20:37:13 crc kubenswrapper[4787]: I0219 20:37:13.787130 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" podUID="ca0d4193-66a0-48c4-8932-8827eaac2c2b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.016770 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Liveness 
probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.016799 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.016828 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.016866 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.016900 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.017015 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.017085 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.017015 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.318501 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-tjbh7 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.318564 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podUID="47705ce6-ef81-47a2-bcd3-a10b7bb9317a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.337178 
4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.337230 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606645 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-2gfvj" podUID="426998bc-15ae-476b-93e7-04f7591afce3" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606661 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pgqnv" podUID="5b15ff10-c8f6-43ca-9538-e781e30d1842" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606681 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-6mcds" podUID="634b3e3d-f43d-4d5c-996c-02c5277282ef" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc 
kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606704 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-2gfvj" podUID="426998bc-15ae-476b-93e7-04f7591afce3" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606731 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-pgqnv" podUID="5b15ff10-c8f6-43ca-9538-e781e30d1842" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606733 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-6mcds" podUID="634b3e3d-f43d-4d5c-996c-02c5277282ef" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606749 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-jvnpx" podUID="1f861785-2aa2-4b3b-aca7-90a83d68bcd8" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.606788 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-jvnpx" podUID="1f861785-2aa2-4b3b-aca7-90a83d68bcd8" containerName="registry-server" probeResult="failure" output=< Feb 19 20:37:14 
crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:37:14 crc kubenswrapper[4787]: > Feb 19 20:37:14 crc kubenswrapper[4787]: I0219 20:37:14.951790 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" podUID="06aa3b20-a2ee-4c2b-bda6-0e876910a26c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.332787 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.332807 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.332522 4787 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-fd6kc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.337165 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" 
podUID="1f465edc-b94b-4a9d-9f9c-1540bb933c8d" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.14:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.373138 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" podUID="78a45da3-619d-4cc4-a819-6dad66a61737" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.373907 4787 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-fd6kc container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.374018 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-tjbh7 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.374042 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.374087 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.374101 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podUID="47705ce6-ef81-47a2-bcd3-a10b7bb9317a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.374227 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd6kc" podUID="1f465edc-b94b-4a9d-9f9c-1540bb933c8d" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.14:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.382343 4787 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-6sjw5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.382469 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" podUID="ace5ef3f-b2ed-4d41-a085-4c662e70061b" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 
crc kubenswrapper[4787]: I0219 20:37:15.545311 4787 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-2sgql container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.546392 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" podUID="fe0d7abc-7a42-4ba4-8403-c6b9dd202217" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.665768 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" podUID="36c1d908-879d-4d98-bd71-06b5c6e802e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.966531 4787 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:15 crc kubenswrapper[4787]: I0219 20:37:15.966598 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get 
\"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.023520 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.023888 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.023591 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.024104 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.114973 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb 
container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.115041 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.115121 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.115142 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.152225 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.201121 4787 patch_prober.go:28] interesting pod/thanos-querier-8467586bf9-4p78p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.201332 4787 patch_prober.go:28] interesting pod/thanos-querier-8467586bf9-4p78p container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.201524 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" podUID="08529c3b-a268-4673-b175-8271ec28811d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.201415 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" podUID="08529c3b-a268-4673-b175-8271ec28811d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.359964 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.360038 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.360213 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.360238 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.377482 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwwj8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.377787 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podUID="221a034a-d231-46b6-b0ea-624788b21fea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.377549 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwwj8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.377998 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podUID="221a034a-d231-46b6-b0ea-624788b21fea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.391749 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.391792 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.391809 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.391834 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.891997 4787 patch_prober.go:28] interesting pod/console-967968ff4-947nc container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.896363 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-967968ff4-947nc" podUID="e826d3df-6eed-485f-b12d-d1fdee12d975" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.943777 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.943815 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 
20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.943822 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.943848 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.962670 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.962759 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.970586 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"c03225a17bff3615ce528e5bf2ba637e9189d79a6d3e090fefcecf0d21d237b0"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 19 20:37:16 crc kubenswrapper[4787]: I0219 20:37:16.970707 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" 
podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" containerID="cri-o://c03225a17bff3615ce528e5bf2ba637e9189d79a6d3e090fefcecf0d21d237b0" gracePeriod=30 Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.009832 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.010684 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.141515 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.141578 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.228584 4787 patch_prober.go:28] interesting pod/controller-manager-85564cdc67-nsqlz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 
19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.228617 4787 patch_prober.go:28] interesting pod/controller-manager-85564cdc67-nsqlz container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.228679 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" podUID="6984a09d-3652-4db8-bae6-874ba82dd3a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.228714 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" podUID="6984a09d-3652-4db8-bae6-874ba82dd3a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.241865 4787 patch_prober.go:28] interesting pod/route-controller-manager-6644687b85-9nz2x container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.241915 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" podUID="b3ac3bda-4e18-4352-b2bf-ee28a2d059ca" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.241974 4787 patch_prober.go:28] interesting pod/route-controller-manager-6644687b85-9nz2x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.241987 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" podUID="b3ac3bda-4e18-4352-b2bf-ee28a2d059ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:18 crc kubenswrapper[4787]: E0219 20:37:18.925003 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:37:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:37:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:37:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T20:37:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.936303 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 20:37:18 crc kubenswrapper[4787]: I0219 20:37:18.936360 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.082319 4787 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" podUID="0fdbbc7b-81f4-401b-8df0-59417ab3ec18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.082345 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" podUID="b7deddaa-9e2a-4e95-8dce-fb6b70a0523e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.123830 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-hdncj" podUID="39e4daf9-e2ed-4325-9f5f-27b2b5662945" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.164852 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tlx7r" podUID="f58c3336-8153-4c54-95c6-2cf2f23cbe57" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.205861 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gdqt5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 
20:37:19.205866 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gdqt5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.205944 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" podUID="34c814cb-c6f7-48b1-8153-e532e5f71bc1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.206025 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" podUID="34c814cb-c6f7-48b1-8153-e532e5f71bc1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.251869 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5k7fl" podUID="68b08cc9-812d-4199-8654-9a5a3f2a855f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.308151 4787 trace.go:236] Trace[534724776]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (19-Feb-2026 20:37:14.545) (total time: 4759ms): Feb 19 20:37:19 crc kubenswrapper[4787]: Trace[534724776]: [4.759453859s] [4.759453859s] END Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.308168 4787 trace.go:236] Trace[444516640]: 
"Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (19-Feb-2026 20:37:14.241) (total time: 5063ms): Feb 19 20:37:19 crc kubenswrapper[4787]: Trace[444516640]: [5.063956933s] [5.063956933s] END Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.310890 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" podUID="00ef1a7b-bf28-4126-b60f-c79af3fde4da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.319455 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-tjbh7 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.319522 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podUID="47705ce6-ef81-47a2-bcd3-a10b7bb9317a" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.319591 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-tjbh7 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.319639 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podUID="47705ce6-ef81-47a2-bcd3-a10b7bb9317a" 
containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.337073 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.337115 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.337180 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.337128 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.392858 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" podUID="4a47b4c3-d7f4-4194-bd9c-fdef06d3450d" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.392861 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-whv4k" podUID="285a6f28-aeac-4b0d-816a-2eb05abe7ef3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.444818 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rv4sk" podUID="c0ee76ae-6d9e-4470-8f77-27d7d231bb7d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.527089 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="e7a69404-5a15-40e5-bd22-faa4493739fa" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.3:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.530816 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" podUID="9cdc475f-0036-4e63-8fd4-c1e44537668d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.593871 4787 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qzrkf container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get 
\"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.593941 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" podUID="b7424e23-a3c1-4e60-87c8-db2ad78ba2a9" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.612849 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" podUID="58f1bc3e-9217-48c3-80af-e4979969b991" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.654988 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podUID="46df12dd-6fd4-4508-8141-ef1cc6551d79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.695813 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" podUID="a752c75e-1e1e-4d78-b82a-95f8df84523f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.696067 4787 patch_prober.go:28] interesting pod/metrics-server-56f6f44749-gt422 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure 
output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.696115 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" podUID="2eb464be-e241-48b6-8e55-47bea187dcb4" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.696141 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" podUID="6e92b566-c5a6-40e8-be75-5de416385888" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.737470 4787 patch_prober.go:28] interesting pod/oauth-openshift-57bcd9fbb-4clxf container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.737528 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" podUID="4b08776c-d5da-4eba-b7bf-9a6e0c56c181" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.737561 4787 patch_prober.go:28] interesting pod/oauth-openshift-57bcd9fbb-4clxf 
container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.737630 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" podUID="4b08776c-d5da-4eba-b7bf-9a6e0c56c181" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.778780 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" podUID="26a6b075-ab07-4508-86f7-2af4934e078a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.778816 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podUID="0944c0f9-ef54-46cc-be37-a59477312705" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.873843 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" podUID="a2428ab4-02d6-4400-820b-995a002fb38c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 
20:37:19.914795 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" podUID="a2428ab4-02d6-4400-820b-995a002fb38c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.914828 4787 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.914894 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.956176 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-s78xt" podUID="bfb600b1-766e-4df0-9f20-a5b4ad0ed684" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.956299 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" podUID="c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Feb 19 20:37:19 crc kubenswrapper[4787]: I0219 20:37:19.998796 4787 patch_prober.go:28] interesting pod/monitoring-plugin-ffdd67d56-c5b8z container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:19.999136 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-ffdd67d56-c5b8z" podUID="9cc2e2fc-ef4a-429e-a313-00db077b7feb" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.014668 4787 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qz8z6 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.014924 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" podUID="a0538112-d98b-49ff-9618-654279d0ef7f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.260184 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get 
\"https://10.217.0.162:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.260473 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.698119 4787 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55fc987df5-9spp8 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.698232 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" podUID="aae80a85-0afc-42a9-817a-57570462dee1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862168 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-mvnk9" podUID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862212 4787 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9n572 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get 
\"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862262 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-9n572" podUID="963c18fc-03cd-46a4-9130-3908e897870e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862252 4787 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55fc987df5-9spp8 container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862359 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" podUID="aae80a85-0afc-42a9-817a-57570462dee1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862483 4787 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-9n572 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.862525 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-9n572" podUID="963c18fc-03cd-46a4-9130-3908e897870e" containerName="operator" probeResult="failure" output="Get 
\"http://10.217.0.9:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.985822 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" podUID="d56f4bb8-5768-45e0-9cf9-6d759249fe69" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.985837 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-9sqnr" podUID="56aef487-656a-47b1-b3b4-d9fe6f62b1f4" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.985928 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-dq2f5" podUID="d56f4bb8-5768-45e0-9cf9-6d759249fe69" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.985961 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-mvnk9" podUID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.985974 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-mvnk9" podUID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.986004 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-9sqnr" podUID="56aef487-656a-47b1-b3b4-d9fe6f62b1f4" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.986021 4787 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-tfckq container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:20 crc kubenswrapper[4787]: I0219 20:37:20.986046 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-tfckq" podUID="a5709e38-dd1f-4a2a-ba8f-4da0055aaf57" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.23:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:21 crc kubenswrapper[4787]: I0219 20:37:21.137970 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-5qk4m" podUID="94194c14-c7cd-4b05-bda1-74ea911cd6cf" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 19 20:37:21 crc kubenswrapper[4787]: I0219 20:37:21.197856 4787 patch_prober.go:28] interesting pod/thanos-querier-8467586bf9-4p78p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:21 crc kubenswrapper[4787]: I0219 20:37:21.197944 4787 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" podUID="08529c3b-a268-4673-b175-8271ec28811d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:21 crc kubenswrapper[4787]: I0219 20:37:21.933466 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 20:37:21 crc kubenswrapper[4787]: I0219 20:37:21.933828 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 20:37:22 crc kubenswrapper[4787]: I0219 20:37:22.140463 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:22 crc kubenswrapper[4787]: I0219 20:37:22.140907 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:22 crc kubenswrapper[4787]: I0219 20:37:22.149855 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 20:37:22 crc kubenswrapper[4787]: I0219 20:37:22.236986 4787 
prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-pn7bv" podUID="af7a16ca-ed17-45d6-aa9e-f2552dc92af7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:22 crc kubenswrapper[4787]: I0219 20:37:22.237011 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-pn7bv" podUID="af7a16ca-ed17-45d6-aa9e-f2552dc92af7" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.138279 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.139557 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.139570 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" probeResult="failure" output="command timed out" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.140098 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.142346 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-zrns9" 
podUID="de424d05-0977-4dac-8bd9-01c37cf49d4e" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.142347 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-zrns9" podUID="de424d05-0977-4dac-8bd9-01c37cf49d4e" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.142710 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.324211 4787 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-dkzcx container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.324286 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-dkzcx" podUID="2c6f8721-8336-47fa-b27a-6c897006b94e" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.546071 4787 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-htw48 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.546544 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-htw48" podUID="9d8ca7ab-f667-423c-926e-a9e2cfc10c1b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.787321 4787 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nj9wt container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.787397 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nj9wt" podUID="ca0d4193-66a0-48c4-8932-8827eaac2c2b" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.951684 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.951731 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.951745 4787 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.951808 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.951897 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.951956 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.955843 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"f68a0763fbd50bbb371afa97cfce2cd209396da986095a15dc90fecd445432bd"} pod="openshift-console-operator/console-operator-58897d9998-vkbfk" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 19 20:37:23 crc kubenswrapper[4787]: I0219 20:37:23.957940 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" containerID="cri-o://f68a0763fbd50bbb371afa97cfce2cd209396da986095a15dc90fecd445432bd" gracePeriod=30 Feb 19 20:37:24 crc 
kubenswrapper[4787]: I0219 20:37:24.176002 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.176113 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.319216 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-tjbh7 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.319280 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podUID="47705ce6-ef81-47a2-bcd3-a10b7bb9317a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.336936 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.336999 4787 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.399834 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.399899 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.399874 4787 patch_prober.go:28] interesting pod/downloads-7954f5f757-6cngl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.400156 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6cngl" podUID="aa6ee378-233f-4cbf-b43c-9569c6a41643" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.478310 4787 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe 
status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.478370 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="74b9f2e5-3b9f-4af9-990f-147a1c6f8943" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.589263 4787 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-qzrkf container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.589558 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qzrkf" podUID="b7424e23-a3c1-4e60-87c8-db2ad78ba2a9" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.621784 4787 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.75:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.621857 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" 
podUID="47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.75:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.822221 4787 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.822296 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="46ee23a2-1b37-42e7-899f-5c1c70a6755b" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.950742 4787 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.950770 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" podUID="06aa3b20-a2ee-4c2b-bda6-0e876910a26c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.950796 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.991873 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-j59h4" podUID="06aa3b20-a2ee-4c2b-bda6-0e876910a26c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.991962 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.992005 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.992761 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 20:37:24 crc kubenswrapper[4787]: I0219 20:37:24.992821 
4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.013599 4787 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qz8z6 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.013670 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-qz8z6" podUID="a0538112-d98b-49ff-9618-654279d0ef7f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.31:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.208231 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" event={"ID":"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4","Type":"ContainerDied","Data":"fd01798008a5a1b57b4d4e9e87558807cb3fb96e27b4e1013ea47642c1a67bbc"} Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.209644 4787 generic.go:334] "Generic (PLEG): container finished" podID="c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4" containerID="fd01798008a5a1b57b4d4e9e87558807cb3fb96e27b4e1013ea47642c1a67bbc" exitCode=1 Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.216099 4787 scope.go:117] "RemoveContainer" containerID="fd01798008a5a1b57b4d4e9e87558807cb3fb96e27b4e1013ea47642c1a67bbc" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.261169 4787 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.261274 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.261700 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.353747 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" podUID="78a45da3-619d-4cc4-a819-6dad66a61737" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.353774 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" podUID="78a45da3-619d-4cc4-a819-6dad66a61737" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.382828 4787 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-6sjw5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe 
status=failure output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.382894 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-6sjw5" podUID="ace5ef3f-b2ed-4d41-a085-4c662e70061b" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.86:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.544984 4787 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-2sgql container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.545345 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-2sgql" podUID="fe0d7abc-7a42-4ba4-8403-c6b9dd202217" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.707762 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" podUID="36c1d908-879d-4d98-bd71-06b5c6e802e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.707762 4787 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-b9547c8c7-s8zzh" podUID="36c1d908-879d-4d98-bd71-06b5c6e802e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.966684 4787 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:25 crc kubenswrapper[4787]: I0219 20:37:25.967026 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.023628 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.023692 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc 
kubenswrapper[4787]: I0219 20:37:26.023738 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.023691 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.024957 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.025067 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.025217 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"493c53de7029af3783afa793495b5484e2379540a76990fdaa0b08a849a712f3"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.025249 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" containerID="cri-o://493c53de7029af3783afa793495b5484e2379540a76990fdaa0b08a849a712f3" 
gracePeriod=30 Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.115860 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.115860 4787 patch_prober.go:28] interesting pod/router-default-5444994796-29dzb container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.115927 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.115941 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-29dzb" podUID="825f12a8-ed8f-4a13-910c-53801339ec23" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.140684 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.143796 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" 
containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.143839 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-6mcds" podUID="634b3e3d-f43d-4d5c-996c-02c5277282ef" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.144089 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-2gfvj" podUID="426998bc-15ae-476b-93e7-04f7591afce3" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.146039 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-2gfvj" podUID="426998bc-15ae-476b-93e7-04f7591afce3" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.146285 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-6mcds" podUID="634b3e3d-f43d-4d5c-996c-02c5277282ef" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.198125 4787 patch_prober.go:28] interesting pod/thanos-querier-8467586bf9-4p78p container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.198178 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-8467586bf9-4p78p" podUID="08529c3b-a268-4673-b175-8271ec28811d" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.359345 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.359459 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.359521 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.359365 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.359624 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.359814 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.362866 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"21850b0dc91b2907c8706d530d3730da0a372cfda316ac9a3b71c93b9fb6a731"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.362915 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" containerID="cri-o://21850b0dc91b2907c8706d530d3730da0a372cfda316ac9a3b71c93b9fb6a731" gracePeriod=30 Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419423 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwwj8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419470 4787 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwwj8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419515 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podUID="221a034a-d231-46b6-b0ea-624788b21fea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419530 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwwj8" podUID="221a034a-d231-46b6-b0ea-624788b21fea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419513 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419581 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419549 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419666 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419685 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.419739 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.421256 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"61f2f4117ff55b133e4567d4e28102695e4be885405c337101cc9e8e0b912e59"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.421324 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" containerID="cri-o://61f2f4117ff55b133e4567d4e28102695e4be885405c337101cc9e8e0b912e59" gracePeriod=30 Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.880196 4787 patch_prober.go:28] interesting 
pod/console-967968ff4-947nc container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.137:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:26 crc kubenswrapper[4787]: I0219 20:37:26.880271 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-967968ff4-947nc" podUID="e826d3df-6eed-485f-b12d-d1fdee12d975" containerName="console" probeResult="failure" output="Get \"https://10.217.0.137:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.003941 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.004256 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.221797 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.221861 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.233988 4787 generic.go:334] "Generic (PLEG): container finished" podID="aae80a85-0afc-42a9-817a-57570462dee1" containerID="82af365d739726b31975ef733236a15ebe90214686e2605afa3d19f909cb99be" exitCode=1 Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.234042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" event={"ID":"aae80a85-0afc-42a9-817a-57570462dee1","Type":"ContainerDied","Data":"82af365d739726b31975ef733236a15ebe90214686e2605afa3d19f909cb99be"} Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.235634 4787 scope.go:117] "RemoveContainer" containerID="82af365d739726b31975ef733236a15ebe90214686e2605afa3d19f909cb99be" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.293835 4787 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-79vrf container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.293895 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf" podUID="8d1e59ca-5c78-4454-a99a-71fe888c607c" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.293906 4787 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-79vrf 
container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.293945 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-79vrf" podUID="8d1e59ca-5c78-4454-a99a-71fe888c607c" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.361763 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.361834 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.420276 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.420346 4787 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: E0219 20:37:27.819209 4787 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.934326 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 20:37:27 crc kubenswrapper[4787]: I0219 20:37:27.934392 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.143625 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-jvnpx" podUID="1f861785-2aa2-4b3b-aca7-90a83d68bcd8" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.142891 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-jvnpx" podUID="1f861785-2aa2-4b3b-aca7-90a83d68bcd8" 
containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.144915 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-pgqnv" podUID="5b15ff10-c8f6-43ca-9538-e781e30d1842" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.144978 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.145000 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pgqnv" podUID="5b15ff10-c8f6-43ca-9538-e781e30d1842" containerName="registry-server" probeResult="failure" output="command timed out" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.145055 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.145160 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.146996 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"c9c41d14362b0ee2efd84b2ed33a65f829082643f053579f9a9bb3dec964ebdf"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.147100 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" containerID="cri-o://c9c41d14362b0ee2efd84b2ed33a65f829082643f053579f9a9bb3dec964ebdf" gracePeriod=30 Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.227673 4787 patch_prober.go:28] interesting pod/controller-manager-85564cdc67-nsqlz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.227749 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" podUID="6984a09d-3652-4db8-bae6-874ba82dd3a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.227832 4787 patch_prober.go:28] interesting pod/controller-manager-85564cdc67-nsqlz container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.227855 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-85564cdc67-nsqlz" podUID="6984a09d-3652-4db8-bae6-874ba82dd3a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.241358 4787 patch_prober.go:28] 
interesting pod/route-controller-manager-6644687b85-9nz2x container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.241393 4787 patch_prober.go:28] interesting pod/route-controller-manager-6644687b85-9nz2x container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.241444 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" podUID="b3ac3bda-4e18-4352-b2bf-ee28a2d059ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.241635 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6644687b85-9nz2x" podUID="b3ac3bda-4e18-4352-b2bf-ee28a2d059ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.247136 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" 
event={"ID":"c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4","Type":"ContainerStarted","Data":"c6eb54373c6dd6388457cf748ae7b94aca6169d61f0b26270eb164c38c5a9ad8"} Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.248161 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.263988 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="659dcc4f-0134-40f4-a6ee-150bb5dee79b" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.513019 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-b5dd774c6-bjggj" podUID="5273ac77-af0e-4a20-aa52-708ac057cfdc" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.513584 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" podUID="58f1bc3e-9217-48c3-80af-e4979969b991" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8080/readyz\": dial tcp 10.217.0.93:8080: connect: connection refused" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 20:37:28.597272 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" podUID="26a6b075-ab07-4508-86f7-2af4934e078a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": dial tcp 10.217.0.114:8081: connect: connection refused" Feb 19 20:37:28 crc kubenswrapper[4787]: I0219 
20:37:28.597373 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" podUID="26a6b075-ab07-4508-86f7-2af4934e078a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": dial tcp 10.217.0.114:8081: connect: connection refused" Feb 19 20:37:28 crc kubenswrapper[4787]: E0219 20:37:28.936471 4787 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.041897 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" podUID="0fdbbc7b-81f4-401b-8df0-59417ab3ec18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.165836 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" podUID="0fdbbc7b-81f4-401b-8df0-59417ab3ec18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.165909 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" podUID="b7deddaa-9e2a-4e95-8dce-fb6b70a0523e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.165946 4787 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-hz9f6" podUID="b7deddaa-9e2a-4e95-8dce-fb6b70a0523e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.178945 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.247854 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gdqt5 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.247864 4787 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gdqt5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.247921 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" podUID="34c814cb-c6f7-48b1-8153-e532e5f71bc1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.247979 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gdqt5" podUID="34c814cb-c6f7-48b1-8153-e532e5f71bc1" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.248158 4787 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.248185 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.259595 4787 generic.go:334] "Generic (PLEG): container finished" podID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerID="493c53de7029af3783afa793495b5484e2379540a76990fdaa0b08a849a712f3" exitCode=0 Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.259776 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" event={"ID":"1f9f0394-f71b-48e3-a338-e824cdbb8c69","Type":"ContainerDied","Data":"493c53de7029af3783afa793495b5484e2379540a76990fdaa0b08a849a712f3"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.261882 4787 generic.go:334] "Generic (PLEG): container finished" podID="26a6b075-ab07-4508-86f7-2af4934e078a" containerID="cd5d6deadf1820db4ff17db918a4b10acb56ebf37cf463634d6ae5c6b85582e6" exitCode=1 Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.261951 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" 
event={"ID":"26a6b075-ab07-4508-86f7-2af4934e078a","Type":"ContainerDied","Data":"cd5d6deadf1820db4ff17db918a4b10acb56ebf37cf463634d6ae5c6b85582e6"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.263456 4787 scope.go:117] "RemoveContainer" containerID="cd5d6deadf1820db4ff17db918a4b10acb56ebf37cf463634d6ae5c6b85582e6" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.263691 4787 generic.go:334] "Generic (PLEG): container finished" podID="78a45da3-619d-4cc4-a819-6dad66a61737" containerID="d3332aeb4216ec66dca10c3867a0caf465a6bd6db1b2e60f855a48fa407eefa6" exitCode=1 Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.263779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" event={"ID":"78a45da3-619d-4cc4-a819-6dad66a61737","Type":"ContainerDied","Data":"d3332aeb4216ec66dca10c3867a0caf465a6bd6db1b2e60f855a48fa407eefa6"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.265239 4787 scope.go:117] "RemoveContainer" containerID="d3332aeb4216ec66dca10c3867a0caf465a6bd6db1b2e60f855a48fa407eefa6" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.267397 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" event={"ID":"aae80a85-0afc-42a9-817a-57570462dee1","Type":"ContainerStarted","Data":"ad7845b1499ae974e0245b29584adb1e3d8b21b2a7c2e2c97de611b2902fbff8"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.267975 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.294373 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-vkbfk_42b96086-3538-440d-a1f9-cd86de6191c7/console-operator/0.log" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.294448 
4787 generic.go:334] "Generic (PLEG): container finished" podID="42b96086-3538-440d-a1f9-cd86de6191c7" containerID="f68a0763fbd50bbb371afa97cfce2cd209396da986095a15dc90fecd445432bd" exitCode=1 Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.294534 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" event={"ID":"42b96086-3538-440d-a1f9-cd86de6191c7","Type":"ContainerDied","Data":"f68a0763fbd50bbb371afa97cfce2cd209396da986095a15dc90fecd445432bd"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.305661 4787 generic.go:334] "Generic (PLEG): container finished" podID="b0d95611-e321-46d4-ba78-b847021133c9" containerID="21850b0dc91b2907c8706d530d3730da0a372cfda316ac9a3b71c93b9fb6a731" exitCode=0 Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.305751 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" event={"ID":"b0d95611-e321-46d4-ba78-b847021133c9","Type":"ContainerDied","Data":"21850b0dc91b2907c8706d530d3730da0a372cfda316ac9a3b71c93b9fb6a731"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.309458 4787 generic.go:334] "Generic (PLEG): container finished" podID="58f1bc3e-9217-48c3-80af-e4979969b991" containerID="5f06b202c4b295287954fa1ff89b8167472dbcbc71c73f01f154be1e3c7b9ed4" exitCode=1 Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.310660 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" event={"ID":"58f1bc3e-9217-48c3-80af-e4979969b991","Type":"ContainerDied","Data":"5f06b202c4b295287954fa1ff89b8167472dbcbc71c73f01f154be1e3c7b9ed4"} Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.311754 4787 scope.go:117] "RemoveContainer" containerID="5f06b202c4b295287954fa1ff89b8167472dbcbc71c73f01f154be1e3c7b9ed4" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.318143 4787 patch_prober.go:28] 
interesting pod/logging-loki-gateway-65d54b8875-tjbh7 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.318192 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-tjbh7" podUID="47705ce6-ef81-47a2-bcd3-a10b7bb9317a" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.337695 4787 patch_prober.go:28] interesting pod/logging-loki-gateway-65d54b8875-96vjl container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.338043 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-65d54b8875-96vjl" podUID="ffe2a444-f47e-4193-b322-5943bf473b44" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.412782 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" podUID="4a47b4c3-d7f4-4194-bd9c-fdef06d3450d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.412786 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" podUID="4a47b4c3-d7f4-4194-bd9c-fdef06d3450d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.495316 4787 trace.go:236] Trace[1397181268]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (19-Feb-2026 20:37:27.078) (total time: 2406ms): Feb 19 20:37:29 crc kubenswrapper[4787]: Trace[1397181268]: [2.406606027s] [2.406606027s] END Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.525649 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="e7a69404-5a15-40e5-bd22-faa4493739fa" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.3:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.525732 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="e7a69404-5a15-40e5-bd22-faa4493739fa" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.3:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.663783 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podUID="46df12dd-6fd4-4508-8141-ef1cc6551d79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.663970 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" podUID="6e92b566-c5a6-40e8-be75-5de416385888" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.710731 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" podUID="6e92b566-c5a6-40e8-be75-5de416385888" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.710744 4787 patch_prober.go:28] interesting pod/metrics-server-56f6f44749-gt422 container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.710833 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" podUID="2eb464be-e241-48b6-8e55-47bea187dcb4" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.710975 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" podUID="46df12dd-6fd4-4508-8141-ef1cc6551d79" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.711271 4787 patch_prober.go:28] interesting 
pod/metrics-server-56f6f44749-gt422 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.713082 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-56f6f44749-gt422" podUID="2eb464be-e241-48b6-8e55-47bea187dcb4" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.724319 4787 patch_prober.go:28] interesting pod/oauth-openshift-57bcd9fbb-4clxf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.724349 4787 patch_prober.go:28] interesting pod/oauth-openshift-57bcd9fbb-4clxf container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.724397 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" podUID="4b08776c-d5da-4eba-b7bf-9a6e0c56c181" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.724410 4787 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-57bcd9fbb-4clxf" podUID="4b08776c-d5da-4eba-b7bf-9a6e0c56c181" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.814022 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podUID="0944c0f9-ef54-46cc-be37-a59477312705" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.814045 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" podUID="0944c0f9-ef54-46cc-be37-a59477312705" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: I0219 20:37:29.814800 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-69988d54ff-ndzss" podUID="a2428ab4-02d6-4400-820b-995a002fb38c" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:29 crc kubenswrapper[4787]: E0219 20:37:29.862121 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-conmon-3a08f2349a8d49bd383839dd173b1db02d197e97761d1684a1a68fb824c76d5e.scope\": RecentStats: unable to find data in memory 
cache]" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.138323 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1ed994a0-dc89-48d6-a734-c6880120eaa5" containerName="prometheus" probeResult="failure" output="command timed out" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.203377 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.324661 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-vkbfk_42b96086-3538-440d-a1f9-cd86de6191c7/console-operator/0.log" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.325215 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" event={"ID":"42b96086-3538-440d-a1f9-cd86de6191c7","Type":"ContainerStarted","Data":"9caf8e1bea156f9f005476c05135e9101610d31fcaa934d58d255f46c14807f8"} Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.325358 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.325709 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.325750 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection 
refused" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.328275 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" event={"ID":"b0d95611-e321-46d4-ba78-b847021133c9","Type":"ContainerStarted","Data":"647cd7b0566cffa11a640acbe94b674851e1e329aee806787322b52aeb99d73c"} Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.328491 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.328744 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.328788 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.330749 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.334167 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.334224 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="f614b9022728cf315e60c057852e563e" containerID="3a08f2349a8d49bd383839dd173b1db02d197e97761d1684a1a68fb824c76d5e" exitCode=1 Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.334315 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3a08f2349a8d49bd383839dd173b1db02d197e97761d1684a1a68fb824c76d5e"} Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.334367 4787 scope.go:117] "RemoveContainer" containerID="c98d771e531239d1330ba5a29726d69ac63cca6f52148c4eb3357e09cff718b9" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.336072 4787 scope.go:117] "RemoveContainer" containerID="3a08f2349a8d49bd383839dd173b1db02d197e97761d1684a1a68fb824c76d5e" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.611555 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-mvnk9" podUID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.611922 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-mvnk9" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.614163 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"b79d39fcaf357f8a5a0ce33fe464792f734009205e08c006ac00d45391fd2b76"} pod="metallb-system/frr-k8s-mvnk9" containerMessage="Container frr failed liveness probe, will be restarted" Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.614478 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-mvnk9" podUID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerName="frr" 
containerID="cri-o://b79d39fcaf357f8a5a0ce33fe464792f734009205e08c006ac00d45391fd2b76" gracePeriod=2 Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.935829 4787 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6rqjl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 20:37:30 crc kubenswrapper[4787]: I0219 20:37:30.936228 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" podUID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.347690 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" event={"ID":"58f1bc3e-9217-48c3-80af-e4979969b991","Type":"ContainerStarted","Data":"06c86f8a95d9916cdacdd2b1d185a6e1de153e41bf97fa7bdc5247390357ff92"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.348179 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.352494 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" event={"ID":"1f9f0394-f71b-48e3-a338-e824cdbb8c69","Type":"ContainerStarted","Data":"201d115b7d92663627bb3ee508b09605692fc77390b033616747b4d5f7d6b41e"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.353391 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" Feb 19 20:37:31 crc 
kubenswrapper[4787]: I0219 20:37:31.353451 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.353477 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.356235 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" event={"ID":"26a6b075-ab07-4508-86f7-2af4934e078a","Type":"ContainerStarted","Data":"9829d27cf5a906e8f4e4ebd03641fa9ab7f8ba72b633c550c36e50b32f848134"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.356466 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.358575 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" event={"ID":"78a45da3-619d-4cc4-a819-6dad66a61737","Type":"ContainerStarted","Data":"940d0dbf5c272210a6e159f7eca170e06c5f56b880fc399d6fcd55ce0beb4270"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.358773 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.361047 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.363435 4787 generic.go:334] "Generic (PLEG): container finished" podID="6e92b566-c5a6-40e8-be75-5de416385888" containerID="ca757893b96d7c2185db409334a734d6d78ff85fd53d470a37a482a076bda373" exitCode=1 Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.363545 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" event={"ID":"6e92b566-c5a6-40e8-be75-5de416385888","Type":"ContainerDied","Data":"ca757893b96d7c2185db409334a734d6d78ff85fd53d470a37a482a076bda373"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.365030 4787 scope.go:117] "RemoveContainer" containerID="ca757893b96d7c2185db409334a734d6d78ff85fd53d470a37a482a076bda373" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.365220 4787 generic.go:334] "Generic (PLEG): container finished" podID="9cdc475f-0036-4e63-8fd4-c1e44537668d" containerID="7aac84434deafa4cf9cdd72fad88200548166374bfc0361e5734046f8762dfca" exitCode=1 Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.365283 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" event={"ID":"9cdc475f-0036-4e63-8fd4-c1e44537668d","Type":"ContainerDied","Data":"7aac84434deafa4cf9cdd72fad88200548166374bfc0361e5734046f8762dfca"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.367009 4787 scope.go:117] "RemoveContainer" containerID="7aac84434deafa4cf9cdd72fad88200548166374bfc0361e5734046f8762dfca" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.367369 4787 generic.go:334] "Generic (PLEG): container finished" podID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerID="61f2f4117ff55b133e4567d4e28102695e4be885405c337101cc9e8e0b912e59" exitCode=0 Feb 19 20:37:31 crc 
kubenswrapper[4787]: I0219 20:37:31.367435 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" event={"ID":"e4e7fab7-39a4-4134-93a4-11f57e017fa0","Type":"ContainerDied","Data":"61f2f4117ff55b133e4567d4e28102695e4be885405c337101cc9e8e0b912e59"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.369053 4787 generic.go:334] "Generic (PLEG): container finished" podID="00ef1a7b-bf28-4126-b60f-c79af3fde4da" containerID="a1c1a285f7eef5ea944ba4cdf8e537e91c8a81ef864bf120e03e2e0788f056fc" exitCode=1 Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.369106 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" event={"ID":"00ef1a7b-bf28-4126-b60f-c79af3fde4da","Type":"ContainerDied","Data":"a1c1a285f7eef5ea944ba4cdf8e537e91c8a81ef864bf120e03e2e0788f056fc"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.371114 4787 scope.go:117] "RemoveContainer" containerID="a1c1a285f7eef5ea944ba4cdf8e537e91c8a81ef864bf120e03e2e0788f056fc" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.371307 4787 generic.go:334] "Generic (PLEG): container finished" podID="a752c75e-1e1e-4d78-b82a-95f8df84523f" containerID="e3a513231a8635ef72ad38c296d16c6f1e4ba5f8506868af5bc2edfb626488cd" exitCode=1 Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.371380 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" event={"ID":"a752c75e-1e1e-4d78-b82a-95f8df84523f","Type":"ContainerDied","Data":"e3a513231a8635ef72ad38c296d16c6f1e4ba5f8506868af5bc2edfb626488cd"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.384137 4787 scope.go:117] "RemoveContainer" containerID="e3a513231a8635ef72ad38c296d16c6f1e4ba5f8506868af5bc2edfb626488cd" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.403885 4787 generic.go:334] "Generic (PLEG): container 
finished" podID="8eeee751-e7e9-412b-81cf-2bd7e702303d" containerID="b79d39fcaf357f8a5a0ce33fe464792f734009205e08c006ac00d45391fd2b76" exitCode=143 Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.405247 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerDied","Data":"b79d39fcaf357f8a5a0ce33fe464792f734009205e08c006ac00d45391fd2b76"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.405284 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mvnk9" event={"ID":"8eeee751-e7e9-412b-81cf-2bd7e702303d","Type":"ContainerStarted","Data":"6f46341ab374b91b51ccb653b27235a80c99ba608225d196242563bbbce909d4"} Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.406893 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.406935 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.407063 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.407093 4787 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 19 20:37:31 crc kubenswrapper[4787]: I0219 20:37:31.991954 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.139081 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" probeResult="failure" output="command timed out"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.139391 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.143115 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" probeResult="failure" output="command timed out"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.146064 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.146750 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"ecc65d51d2df1ace550269a8befb7e963846b0b2777e2f062fb718ede5377d48"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.437302 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k" event={"ID":"9cdc475f-0036-4e63-8fd4-c1e44537668d","Type":"ContainerStarted","Data":"c165f9630df10c056dc1038c7ce9c32609830fa382ef88c830c02ea48cf2f517"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.438101 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.448434 4787 generic.go:334] "Generic (PLEG): container finished" podID="880dd943-ce91-4373-ab8a-fd5df0a44e2a" containerID="075f330daa49d44db4f7bafac1ae8166acd35567e1a2ca9ad4135105f5c8dcd7" exitCode=1
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.448779 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" event={"ID":"880dd943-ce91-4373-ab8a-fd5df0a44e2a","Type":"ContainerDied","Data":"075f330daa49d44db4f7bafac1ae8166acd35567e1a2ca9ad4135105f5c8dcd7"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.450106 4787 scope.go:117] "RemoveContainer" containerID="075f330daa49d44db4f7bafac1ae8166acd35567e1a2ca9ad4135105f5c8dcd7"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.458959 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4" event={"ID":"00ef1a7b-bf28-4126-b60f-c79af3fde4da","Type":"ContainerStarted","Data":"b3bc6c4fa29bf1ac848d2b673de079746bd2ad4119ff208f6e47fb46693c9b29"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.459343 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.468949 4787 generic.go:334] "Generic (PLEG): container finished" podID="4a47b4c3-d7f4-4194-bd9c-fdef06d3450d" containerID="b0c6b111d4ef249656af06dfe08162919cd4616bf432aee952fa282a8d5564cd" exitCode=1
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.469035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" event={"ID":"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d","Type":"ContainerDied","Data":"b0c6b111d4ef249656af06dfe08162919cd4616bf432aee952fa282a8d5564cd"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.471792 4787 scope.go:117] "RemoveContainer" containerID="b0c6b111d4ef249656af06dfe08162919cd4616bf432aee952fa282a8d5564cd"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.539026 4787 generic.go:334] "Generic (PLEG): container finished" podID="32ca8e62-696d-4f05-9ba2-b8fbc20e407f" containerID="c03225a17bff3615ce528e5bf2ba637e9189d79a6d3e090fefcecf0d21d237b0" exitCode=0
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.539075 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" event={"ID":"32ca8e62-696d-4f05-9ba2-b8fbc20e407f","Type":"ContainerDied","Data":"c03225a17bff3615ce528e5bf2ba637e9189d79a6d3e090fefcecf0d21d237b0"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.539281 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.541665 4787 generic.go:334] "Generic (PLEG): container finished" podID="0fdbbc7b-81f4-401b-8df0-59417ab3ec18" containerID="19b1e79e330dcb597722585a9292953b168b0bed612199382bf4292fda97b8ec" exitCode=1
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.541718 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" event={"ID":"0fdbbc7b-81f4-401b-8df0-59417ab3ec18","Type":"ContainerDied","Data":"19b1e79e330dcb597722585a9292953b168b0bed612199382bf4292fda97b8ec"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.542480 4787 scope.go:117] "RemoveContainer" containerID="19b1e79e330dcb597722585a9292953b168b0bed612199382bf4292fda97b8ec"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.544446 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" event={"ID":"e4e7fab7-39a4-4134-93a4-11f57e017fa0","Type":"ContainerStarted","Data":"20c19d3129980dc587fbf14ccf51687b91acae35f42a1f8d8418878ad714d317"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.545326 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.545389 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.545410 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.549215 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.554130 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d54334dd440dfa6396a5cc6ac5e6ed52eb14da1ff3f77a45d08219b4cce016b2"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.569041 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv" event={"ID":"a752c75e-1e1e-4d78-b82a-95f8df84523f","Type":"ContainerStarted","Data":"fc7815dfd52b18b18f95034ec80d044254e6acbec2ade62be0a8279923bb67cf"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.569577 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.579796 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" event={"ID":"6e92b566-c5a6-40e8-be75-5de416385888","Type":"ContainerStarted","Data":"7b9fb561b359bf7a140996555f78ee75745d992c8835a8220cef015d2fc5e77f"}
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.580949 4787 status_manager.go:317] "Container readiness changed for unknown container" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg" containerID="cri-o://ca757893b96d7c2185db409334a734d6d78ff85fd53d470a37a482a076bda373"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.580977 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.581563 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body=
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.581596 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.950768 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.951174 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.950836 4787 patch_prober.go:28] interesting pod/console-operator-58897d9998-vkbfk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Feb 19 20:37:32 crc kubenswrapper[4787]: I0219 20:37:32.951260 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vkbfk" podUID="42b96086-3538-440d-a1f9-cd86de6191c7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.140050 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" probeResult="failure" output="command timed out"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.140172 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.140195 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" probeResult="failure" output="command timed out"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.140313 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.141463 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"5baf6aff915f430780af9e288041e5e98fa43631f9a95254e345d55b6ad290d2"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.142458 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-zrns9" podUID="de424d05-0977-4dac-8bd9-01c37cf49d4e" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.146063 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-zrns9" podUID="de424d05-0977-4dac-8bd9-01c37cf49d4e" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.401159 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.599958 4787 generic.go:334] "Generic (PLEG): container finished" podID="46df12dd-6fd4-4508-8141-ef1cc6551d79" containerID="dd9dce45700eefba0df994264075c6a68990ccd0daf840432e73b8ce497777bf" exitCode=1
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.600067 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" event={"ID":"46df12dd-6fd4-4508-8141-ef1cc6551d79","Type":"ContainerDied","Data":"dd9dce45700eefba0df994264075c6a68990ccd0daf840432e73b8ce497777bf"}
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.601021 4787 scope.go:117] "RemoveContainer" containerID="dd9dce45700eefba0df994264075c6a68990ccd0daf840432e73b8ce497777bf"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.605076 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2" event={"ID":"4a47b4c3-d7f4-4194-bd9c-fdef06d3450d","Type":"ContainerStarted","Data":"ef17c89098f412ed1ada1683e17961d149a9667e0f2e9fe3cdfe5c242cdca14f"}
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.605299 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.616338 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl" event={"ID":"32ca8e62-696d-4f05-9ba2-b8fbc20e407f","Type":"ContainerStarted","Data":"343d1eff353b8ca74a28aa17f36c27f959b3c9f0a0d80b68b0d523b725377bf0"}
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.619711 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw" event={"ID":"0fdbbc7b-81f4-401b-8df0-59417ab3ec18","Type":"ContainerStarted","Data":"d7fad8e9e872659ccc29d2335d26bb804eb0df4dd9ad051de1d05249f6795806"}
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.619946 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.625127 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf" event={"ID":"880dd943-ce91-4373-ab8a-fd5df0a44e2a","Type":"ContainerStarted","Data":"3ea1fb7d29cf21f1ac656661f1a7229546decfe994a02684a3518347563bbf96"}
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.625449 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.626936 4787 generic.go:334] "Generic (PLEG): container finished" podID="0944c0f9-ef54-46cc-be37-a59477312705" containerID="28ae7749042adad40aadabb58f63c7615de1d59d26aad161c8b727d0cb1e6696" exitCode=1
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.627062 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" event={"ID":"0944c0f9-ef54-46cc-be37-a59477312705","Type":"ContainerDied","Data":"28ae7749042adad40aadabb58f63c7615de1d59d26aad161c8b727d0cb1e6696"}
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.627713 4787 scope.go:117] "RemoveContainer" containerID="28ae7749042adad40aadabb58f63c7615de1d59d26aad161c8b727d0cb1e6696"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.627917 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.636131 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.636178 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.874835 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerName="galera" containerID="cri-o://5baf6aff915f430780af9e288041e5e98fa43631f9a95254e345d55b6ad290d2" gracePeriod=30
Feb 19 20:37:33 crc kubenswrapper[4787]: I0219 20:37:33.879856 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="5848c368-e71c-439d-bfca-f241813f9136" containerName="galera" containerID="cri-o://ecc65d51d2df1ace550269a8befb7e963846b0b2777e2f062fb718ede5377d48" gracePeriod=29
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.567597 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mvnk9"
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.616592 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mvnk9"
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.639014 4787 generic.go:334] "Generic (PLEG): container finished" podID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerID="c9c41d14362b0ee2efd84b2ed33a65f829082643f053579f9a9bb3dec964ebdf" exitCode=0
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.639091 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerDied","Data":"c9c41d14362b0ee2efd84b2ed33a65f829082643f053579f9a9bb3dec964ebdf"}
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.639142 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerStarted","Data":"892d4c385aa5f4c2ae9e45642268944fd160f895e5e91b44055abe59d979f294"}
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.641240 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7" event={"ID":"0944c0f9-ef54-46cc-be37-a59477312705","Type":"ContainerStarted","Data":"822264b721c453e384dba545214239ccca5912d020ce99a2de2517490d81dbfa"}
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.641486 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7"
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.644595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6" event={"ID":"46df12dd-6fd4-4508-8141-ef1cc6551d79","Type":"ContainerStarted","Data":"f96f34cec6455e3d6f46495b2c0f125dd5f4361a0b115548668d1a85ffae2efe"}
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.645936 4787 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-qfzrp container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.645989 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp" podUID="e4e7fab7-39a4-4134-93a4-11f57e017fa0" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Feb 19 20:37:34 crc kubenswrapper[4787]: I0219 20:37:34.646042 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6"
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.024347 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body=
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.024714 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused"
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.024509 4787 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-p9djm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body=
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.024787 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm" podUID="1f9f0394-f71b-48e3-a338-e824cdbb8c69" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused"
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.358991 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.359025 4787 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nh5mn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.359050 4787 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.359086 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn" podUID="b0d95611-e321-46d4-ba78-b847021133c9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.469098 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-qfzrp"
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.671796 4787 generic.go:334] "Generic (PLEG): container finished" podID="fec6d8b2-4d43-4053-8028-747e6d28f7c4" containerID="5baf6aff915f430780af9e288041e5e98fa43631f9a95254e345d55b6ad290d2" exitCode=0
Feb 19 20:37:35 crc kubenswrapper[4787]: I0219 20:37:35.672835 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fec6d8b2-4d43-4053-8028-747e6d28f7c4","Type":"ContainerDied","Data":"5baf6aff915f430780af9e288041e5e98fa43631f9a95254e345d55b6ad290d2"}
Feb 19 20:37:36 crc kubenswrapper[4787]: I0219 20:37:36.572186 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:37:36 crc kubenswrapper[4787]: I0219 20:37:36.683689 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fec6d8b2-4d43-4053-8028-747e6d28f7c4","Type":"ContainerStarted","Data":"b73d452334d9b515a18e42578e4eca3e2f33604b8766598914c96510b0243959"}
Feb 19 20:37:37 crc kubenswrapper[4787]: I0219 20:37:37.134772 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6rqjl"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.001850 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-zvvkw"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.271797 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-9p6x4"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.331182 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-66hj2"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.490358 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-w557k"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.496738 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-5g7hg"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.550446 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-bbnhv"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.598527 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-88xzz"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.623239 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-j2ktf"
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.713069 4787 generic.go:334] "Generic (PLEG): container finished" podID="5848c368-e71c-439d-bfca-f241813f9136" containerID="ecc65d51d2df1ace550269a8befb7e963846b0b2777e2f062fb718ede5377d48" exitCode=0
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.713123 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5848c368-e71c-439d-bfca-f241813f9136","Type":"ContainerDied","Data":"ecc65d51d2df1ace550269a8befb7e963846b0b2777e2f062fb718ede5377d48"}
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.713153 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5848c368-e71c-439d-bfca-f241813f9136","Type":"ContainerStarted","Data":"2740993fd99ae07aface0cdc319b8554f73e76e542cc7645daf7686876f77358"}
Feb 19 20:37:38 crc kubenswrapper[4787]: I0219 20:37:38.805946 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-x8ltf"
Feb 19 20:37:39 crc kubenswrapper[4787]: I0219 20:37:39.212215 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:37:39 crc kubenswrapper[4787]: I0219 20:37:39.446381 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:37:39 crc kubenswrapper[4787]: I0219 20:37:39.547928 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-55fc987df5-9spp8"
Feb 19 20:37:40 crc kubenswrapper[4787]: I0219 20:37:40.406843 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 19 20:37:40 crc kubenswrapper[4787]: I0219 20:37:40.408298 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 19 20:37:41 crc kubenswrapper[4787]: I0219 20:37:41.835080 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 19 20:37:41 crc kubenswrapper[4787]: I0219 20:37:41.835152 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 19 20:37:41 crc kubenswrapper[4787]: I0219 20:37:41.991392 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:37:41 crc kubenswrapper[4787]: I0219 20:37:41.995865 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:37:42 crc kubenswrapper[4787]: I0219 20:37:42.457896 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:37:42 crc kubenswrapper[4787]: I0219 20:37:42.457977 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 19 20:37:42 crc kubenswrapper[4787]: I0219 20:37:42.460427 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"394e0571e8f9522993a1e101cfdc0de71d9e9aa349755df6bc7cb478eccbe931"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Feb 19 20:37:42 crc kubenswrapper[4787]: I0219 20:37:42.460512 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" containerID="cri-o://394e0571e8f9522993a1e101cfdc0de71d9e9aa349755df6bc7cb478eccbe931" gracePeriod=30
Feb 19 20:37:42 crc kubenswrapper[4787]: I0219 20:37:42.760000 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 20:37:42 crc kubenswrapper[4787]: I0219 20:37:42.955066 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vkbfk"
Feb 19 20:37:43 crc kubenswrapper[4787]: I0219 20:37:43.768408 4787 generic.go:334] "Generic (PLEG): container finished" podID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerID="394e0571e8f9522993a1e101cfdc0de71d9e9aa349755df6bc7cb478eccbe931" exitCode=0
Feb 19 20:37:43 crc kubenswrapper[4787]: I0219 20:37:43.768767 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f4e423c-1e8b-47e3-af08-1190ee8942aa","Type":"ContainerDied","Data":"394e0571e8f9522993a1e101cfdc0de71d9e9aa349755df6bc7cb478eccbe931"}
Feb 19 20:37:44 crc kubenswrapper[4787]: I0219 20:37:44.276986 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc"
Feb 19 20:37:45 crc kubenswrapper[4787]: I0219 20:37:45.035796 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-p9djm"
Feb 19 20:37:45 crc kubenswrapper[4787]: I0219 20:37:45.363623 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nh5mn"
Feb 19 20:37:46 crc kubenswrapper[4787]: I0219 20:37:46.804409 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f4e423c-1e8b-47e3-af08-1190ee8942aa","Type":"ContainerStarted","Data":"02f158b23669c87e56a6ad84cde876d7ace3623816920d003d4a1d6a409cd9b1"}
Feb 19 20:37:48 crc kubenswrapper[4787]: I0219 20:37:48.527118 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-77xc6"
Feb 19 20:37:48 crc kubenswrapper[4787]: I0219 20:37:48.727476 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q46b7"
Feb 19 20:37:50 crc kubenswrapper[4787]: I0219 20:37:50.425004 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 19 20:37:55 crc kubenswrapper[4787]: I0219 20:37:55.441318 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:38:00 crc kubenswrapper[4787]: I0219 20:38:00.640698 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:38:02 crc kubenswrapper[4787]: I0219 20:38:02.973694 4787 trace.go:236] Trace[11458724]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (19-Feb-2026 20:38:01.932) (total time: 1036ms):
Feb 19 20:38:02 crc kubenswrapper[4787]: Trace[11458724]: [1.036766246s] [1.036766246s] END
Feb 19 20:38:05 crc kubenswrapper[4787]: I0219 20:38:05.444755 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:38:08 crc kubenswrapper[4787]: I0219 20:38:08.516374 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59989c9b4f-q9rqs"
Feb 19 20:38:09 crc kubenswrapper[4787]: I0219 20:38:09.263421 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 20:38:09 crc kubenswrapper[4787]: I0219 20:38:09.263777 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 20:38:10 crc kubenswrapper[4787]: I0219 20:38:10.466665 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 20:38:11 crc kubenswrapper[4787]: I0219 20:38:11.536091 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:11 crc kubenswrapper[4787]: I0219 20:38:11.544469 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-notification-agent" containerID="cri-o://e55a7f1f9107aa241086f0f06c7208b97262784369b4ca5670f181e19a93e1d8" gracePeriod=30
Feb 19 20:38:11 crc kubenswrapper[4787]: I0219 20:38:11.544562 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="proxy-httpd" containerID="cri-o://8b218b1e5846713b8a5bf6a923f4349f95c146c6212415457aa6f4b349e6e953" gracePeriod=30
Feb 19 20:38:11 crc kubenswrapper[4787]: I0219 20:38:11.544563 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" containerID="cri-o://892d4c385aa5f4c2ae9e45642268944fd160f895e5e91b44055abe59d979f294" gracePeriod=30
Feb 19 20:38:11 crc kubenswrapper[4787]: I0219 20:38:11.544570 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="sg-core" containerID="cri-o://c093d8c6a13061fd930d9992dd9777dd7e91994efdca82deea207bc06aad7b79" gracePeriod=30
Feb 19 20:38:12 crc kubenswrapper[4787]: I0219 20:38:12.138297 4787 generic.go:334] "Generic (PLEG): container finished" podID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerID="c093d8c6a13061fd930d9992dd9777dd7e91994efdca82deea207bc06aad7b79" exitCode=2
Feb 19 20:38:12 crc kubenswrapper[4787]: I0219 20:38:12.138356 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerDied","Data":"c093d8c6a13061fd930d9992dd9777dd7e91994efdca82deea207bc06aad7b79"}
Feb 19 20:38:13 crc kubenswrapper[4787]: I0219 20:38:13.175716 4787 generic.go:334] "Generic (PLEG): container finished" podID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerID="892d4c385aa5f4c2ae9e45642268944fd160f895e5e91b44055abe59d979f294" exitCode=0
Feb 19 20:38:13 crc kubenswrapper[4787]: I0219 20:38:13.176010 4787 generic.go:334] "Generic (PLEG): container finished" podID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerID="8b218b1e5846713b8a5bf6a923f4349f95c146c6212415457aa6f4b349e6e953" exitCode=0
Feb 19 20:38:13 crc kubenswrapper[4787]: I0219 20:38:13.176031 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerDied","Data":"892d4c385aa5f4c2ae9e45642268944fd160f895e5e91b44055abe59d979f294"}
Feb 19 20:38:13 crc kubenswrapper[4787]: I0219 20:38:13.176057 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerDied","Data":"8b218b1e5846713b8a5bf6a923f4349f95c146c6212415457aa6f4b349e6e953"}
Feb 19 20:38:13 crc kubenswrapper[4787]: I0219 20:38:13.179217 4787 scope.go:117] "RemoveContainer" containerID="c9c41d14362b0ee2efd84b2ed33a65f829082643f053579f9a9bb3dec964ebdf"
Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.208140 4787 generic.go:334] "Generic (PLEG): container finished" podID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerID="e55a7f1f9107aa241086f0f06c7208b97262784369b4ca5670f181e19a93e1d8" exitCode=0
Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.208187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerDied","Data":"e55a7f1f9107aa241086f0f06c7208b97262784369b4ca5670f181e19a93e1d8"}
Feb 19 20:38:14
crc kubenswrapper[4787]: I0219 20:38:14.749276 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.832749 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-run-httpd\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.833200 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-combined-ca-bundle\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.833454 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-scripts\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.833630 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-ceilometer-tls-certs\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.833786 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-sg-core-conf-yaml\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.833922 4787 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-log-httpd\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.834061 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tmnz\" (UniqueName: \"kubernetes.io/projected/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-kube-api-access-5tmnz\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.834184 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-config-data\") pod \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\" (UID: \"c0dc3035-d2a7-4db9-bd9e-ae471ff65222\") " Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.841140 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.843080 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.857682 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-kube-api-access-5tmnz" (OuterVolumeSpecName: "kube-api-access-5tmnz") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "kube-api-access-5tmnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.858580 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-scripts" (OuterVolumeSpecName: "scripts") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.932847 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.938219 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.938271 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.938287 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.938300 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tmnz\" (UniqueName: \"kubernetes.io/projected/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-kube-api-access-5tmnz\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:14 crc kubenswrapper[4787]: I0219 20:38:14.938312 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.003091 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.022771 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.041497 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.041534 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.087568 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-config-data" (OuterVolumeSpecName: "config-data") pod "c0dc3035-d2a7-4db9-bd9e-ae471ff65222" (UID: "c0dc3035-d2a7-4db9-bd9e-ae471ff65222"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.143834 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0dc3035-d2a7-4db9-bd9e-ae471ff65222-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.224784 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0dc3035-d2a7-4db9-bd9e-ae471ff65222","Type":"ContainerDied","Data":"898b6f0e71fdecf1cba1ba32ae9c388365ccfb7fcd03b968448961d95035e306"} Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.224836 4787 scope.go:117] "RemoveContainer" containerID="892d4c385aa5f4c2ae9e45642268944fd160f895e5e91b44055abe59d979f294" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.225178 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.270825 4787 scope.go:117] "RemoveContainer" containerID="8b218b1e5846713b8a5bf6a923f4349f95c146c6212415457aa6f4b349e6e953" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.280435 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.290424 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.305483 4787 scope.go:117] "RemoveContainer" containerID="c093d8c6a13061fd930d9992dd9777dd7e91994efdca82deea207bc06aad7b79" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.342057 4787 scope.go:117] "RemoveContainer" containerID="e55a7f1f9107aa241086f0f06c7208b97262784369b4ca5670f181e19a93e1d8" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.388932 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 
20:38:15.409744 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409784 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 20:38:15.409840 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="extract-content" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409847 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="extract-content" Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 20:38:15.409860 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-notification-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409865 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-notification-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 20:38:15.409878 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="sg-core" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409887 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="sg-core" Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 20:38:15.409904 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="extract-utilities" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409910 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="extract-utilities" Feb 19 20:38:15 crc 
kubenswrapper[4787]: E0219 20:38:15.409922 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="proxy-httpd" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409929 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="proxy-httpd" Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 20:38:15.409936 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="registry-server" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.409942 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="registry-server" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410220 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="sg-core" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410244 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410257 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410264 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="proxy-httpd" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410275 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aea2922-b956-4ae3-a1ce-8e5378029fe5" containerName="registry-server" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410285 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" 
containerName="ceilometer-notification-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: E0219 20:38:15.410513 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.410522 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" containerName="ceilometer-central-agent" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.419715 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.427324 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.427324 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.435147 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.483148 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.484579 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="3f4e423c-1e8b-47e3-af08-1190ee8942aa" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.590565 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdww\" (UniqueName: \"kubernetes.io/projected/59d17580-e5e2-4a0a-bfd1-951487271b7a-kube-api-access-grdww\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc 
kubenswrapper[4787]: I0219 20:38:15.590636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.591647 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.591717 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-log-httpd\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.591756 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-config-data\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.591821 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-scripts\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.591866 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.591955 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-run-httpd\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696408 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grdww\" (UniqueName: \"kubernetes.io/projected/59d17580-e5e2-4a0a-bfd1-951487271b7a-kube-api-access-grdww\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696462 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696547 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696590 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-log-httpd\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " 
pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696639 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-config-data\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696691 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-scripts\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696717 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.696759 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-run-httpd\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.703235 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-log-httpd\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.704072 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-run-httpd\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.710630 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.711620 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.715148 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.729029 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-scripts\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.737969 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-config-data\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.754946 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grdww\" (UniqueName: \"kubernetes.io/projected/59d17580-e5e2-4a0a-bfd1-951487271b7a-kube-api-access-grdww\") pod \"ceilometer-0\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") " pod="openstack/ceilometer-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.885613 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 20:38:15 crc kubenswrapper[4787]: I0219 20:38:15.992834 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 20:38:16 crc kubenswrapper[4787]: I0219 20:38:16.047061 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 20:38:16 crc kubenswrapper[4787]: I0219 20:38:16.764857 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 20:38:16 crc kubenswrapper[4787]: W0219 20:38:16.768710 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59d17580_e5e2_4a0a_bfd1_951487271b7a.slice/crio-5458f861017f0e2c1fe286c187e55c419d72ebf8a872323426e7a1e167b093e9 WatchSource:0}: Error finding container 5458f861017f0e2c1fe286c187e55c419d72ebf8a872323426e7a1e167b093e9: Status 404 returned error can't find the container with id 5458f861017f0e2c1fe286c187e55c419d72ebf8a872323426e7a1e167b093e9 Feb 19 20:38:16 crc kubenswrapper[4787]: I0219 20:38:16.904597 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0dc3035-d2a7-4db9-bd9e-ae471ff65222" path="/var/lib/kubelet/pods/c0dc3035-d2a7-4db9-bd9e-ae471ff65222/volumes" Feb 19 20:38:17 crc kubenswrapper[4787]: I0219 20:38:17.134191 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 20:38:17 crc kubenswrapper[4787]: I0219 20:38:17.252710 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerStarted","Data":"5458f861017f0e2c1fe286c187e55c419d72ebf8a872323426e7a1e167b093e9"}
Feb 19 20:38:17 crc kubenswrapper[4787]: I0219 20:38:17.256697 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 19 20:38:18 crc kubenswrapper[4787]: I0219 20:38:18.265154 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerStarted","Data":"41b04699f21d06b0d6734444b38ad75ea380c0fc51a64dbe7d250875f21d4943"}
Feb 19 20:38:18 crc kubenswrapper[4787]: I0219 20:38:18.265816 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerStarted","Data":"f72a61791b873636800e0062ab7391c556c2c7457a131b4a515bf5bc35522744"}
Feb 19 20:38:19 crc kubenswrapper[4787]: I0219 20:38:19.279680 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerStarted","Data":"a108a85c73f9360bcb0f0bf829345caa7271f72b5282256c0914bc5aa5990aaf"}
Feb 19 20:38:20 crc kubenswrapper[4787]: I0219 20:38:20.466696 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 19 20:38:21 crc kubenswrapper[4787]: I0219 20:38:21.303929 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerStarted","Data":"d8070c051d41ae46ca3430e4e4bd6b70cc340bbdd428324e1d9c6e2385726671"}
Feb 19 20:38:21 crc kubenswrapper[4787]: I0219 20:38:21.304254 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 20:38:23 crc kubenswrapper[4787]: I0219 20:38:23.681318 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.678187394 podStartE2EDuration="8.680572069s" podCreationTimestamp="2026-02-19 20:38:15 +0000 UTC" firstStartedPulling="2026-02-19 20:38:16.769908253 +0000 UTC m=+4764.560574195" lastFinishedPulling="2026-02-19 20:38:20.772292928 +0000 UTC m=+4768.562958870" observedRunningTime="2026-02-19 20:38:21.330878637 +0000 UTC m=+4769.121544599" watchObservedRunningTime="2026-02-19 20:38:23.680572069 +0000 UTC m=+4771.471238011"
Feb 19 20:38:23 crc kubenswrapper[4787]: I0219 20:38:23.695234 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:23 crc kubenswrapper[4787]: I0219 20:38:23.695587 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-central-agent" containerID="cri-o://f72a61791b873636800e0062ab7391c556c2c7457a131b4a515bf5bc35522744" gracePeriod=30
Feb 19 20:38:23 crc kubenswrapper[4787]: I0219 20:38:23.695670 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="proxy-httpd" containerID="cri-o://d8070c051d41ae46ca3430e4e4bd6b70cc340bbdd428324e1d9c6e2385726671" gracePeriod=30
Feb 19 20:38:23 crc kubenswrapper[4787]: I0219 20:38:23.695677 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="sg-core" containerID="cri-o://a108a85c73f9360bcb0f0bf829345caa7271f72b5282256c0914bc5aa5990aaf" gracePeriod=30
Feb 19 20:38:23 crc kubenswrapper[4787]: I0219 20:38:23.695697 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-notification-agent" containerID="cri-o://41b04699f21d06b0d6734444b38ad75ea380c0fc51a64dbe7d250875f21d4943" gracePeriod=30
Feb 19 20:38:24 crc kubenswrapper[4787]: I0219 20:38:24.338524 4787 generic.go:334] "Generic (PLEG): container finished" podID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerID="d8070c051d41ae46ca3430e4e4bd6b70cc340bbdd428324e1d9c6e2385726671" exitCode=0
Feb 19 20:38:24 crc kubenswrapper[4787]: I0219 20:38:24.338840 4787 generic.go:334] "Generic (PLEG): container finished" podID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerID="a108a85c73f9360bcb0f0bf829345caa7271f72b5282256c0914bc5aa5990aaf" exitCode=2
Feb 19 20:38:24 crc kubenswrapper[4787]: I0219 20:38:24.338854 4787 generic.go:334] "Generic (PLEG): container finished" podID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerID="41b04699f21d06b0d6734444b38ad75ea380c0fc51a64dbe7d250875f21d4943" exitCode=0
Feb 19 20:38:24 crc kubenswrapper[4787]: I0219 20:38:24.338586 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerDied","Data":"d8070c051d41ae46ca3430e4e4bd6b70cc340bbdd428324e1d9c6e2385726671"}
Feb 19 20:38:24 crc kubenswrapper[4787]: I0219 20:38:24.338890 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerDied","Data":"a108a85c73f9360bcb0f0bf829345caa7271f72b5282256c0914bc5aa5990aaf"}
Feb 19 20:38:24 crc kubenswrapper[4787]: I0219 20:38:24.338904 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerDied","Data":"41b04699f21d06b0d6734444b38ad75ea380c0fc51a64dbe7d250875f21d4943"}
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.500047 4787 generic.go:334] "Generic (PLEG): container finished" podID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerID="f72a61791b873636800e0062ab7391c556c2c7457a131b4a515bf5bc35522744" exitCode=0
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.500279 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerDied","Data":"f72a61791b873636800e0062ab7391c556c2c7457a131b4a515bf5bc35522744"}
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.805973 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976473 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-ceilometer-tls-certs\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976553 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grdww\" (UniqueName: \"kubernetes.io/projected/59d17580-e5e2-4a0a-bfd1-951487271b7a-kube-api-access-grdww\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976642 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-run-httpd\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976699 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-sg-core-conf-yaml\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976732 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-combined-ca-bundle\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976758 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-config-data\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.976798 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-scripts\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.977072 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.977162 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-log-httpd\") pod \"59d17580-e5e2-4a0a-bfd1-951487271b7a\" (UID: \"59d17580-e5e2-4a0a-bfd1-951487271b7a\") "
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.977738 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.978439 4787 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:31 crc kubenswrapper[4787]: I0219 20:38:31.978464 4787 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59d17580-e5e2-4a0a-bfd1-951487271b7a-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.441082 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59d17580-e5e2-4a0a-bfd1-951487271b7a-kube-api-access-grdww" (OuterVolumeSpecName: "kube-api-access-grdww") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "kube-api-access-grdww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.467452 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-scripts" (OuterVolumeSpecName: "scripts") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.495847 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.503300 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grdww\" (UniqueName: \"kubernetes.io/projected/59d17580-e5e2-4a0a-bfd1-951487271b7a-kube-api-access-grdww\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.503639 4787 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.503653 4787 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.540081 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59d17580-e5e2-4a0a-bfd1-951487271b7a","Type":"ContainerDied","Data":"5458f861017f0e2c1fe286c187e55c419d72ebf8a872323426e7a1e167b093e9"}
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.540139 4787 scope.go:117] "RemoveContainer" containerID="d8070c051d41ae46ca3430e4e4bd6b70cc340bbdd428324e1d9c6e2385726671"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.540386 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.659790 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.710564 4787 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.712753 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.713785 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-config-data" (OuterVolumeSpecName: "config-data") pod "59d17580-e5e2-4a0a-bfd1-951487271b7a" (UID: "59d17580-e5e2-4a0a-bfd1-951487271b7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.814801 4787 scope.go:117] "RemoveContainer" containerID="a108a85c73f9360bcb0f0bf829345caa7271f72b5282256c0914bc5aa5990aaf"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.816724 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.816769 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59d17580-e5e2-4a0a-bfd1-951487271b7a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.858116 4787 scope.go:117] "RemoveContainer" containerID="41b04699f21d06b0d6734444b38ad75ea380c0fc51a64dbe7d250875f21d4943"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.907740 4787 scope.go:117] "RemoveContainer" containerID="f72a61791b873636800e0062ab7391c556c2c7457a131b4a515bf5bc35522744"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.925402 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.931072 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.969662 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:32 crc kubenswrapper[4787]: E0219 20:38:32.970283 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="sg-core"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.970629 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="sg-core"
Feb 19 20:38:32 crc kubenswrapper[4787]: E0219 20:38:32.970662 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="proxy-httpd"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.970671 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="proxy-httpd"
Feb 19 20:38:32 crc kubenswrapper[4787]: E0219 20:38:32.970696 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-central-agent"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.970704 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-central-agent"
Feb 19 20:38:32 crc kubenswrapper[4787]: E0219 20:38:32.970733 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-notification-agent"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.970741 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-notification-agent"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.971049 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-notification-agent"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.971077 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="sg-core"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.971092 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="proxy-httpd"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.971114 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" containerName="ceilometer-central-agent"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.977330 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.981515 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.982207 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.982722 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 20:38:32 crc kubenswrapper[4787]: I0219 20:38:32.992127 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127198 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127273 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-config-data\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127328 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-scripts\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127370 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127414 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9b9\" (UniqueName: \"kubernetes.io/projected/182e6de6-87eb-490b-a614-82c6063752f9-kube-api-access-nh9b9\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127453 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127509 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/182e6de6-87eb-490b-a614-82c6063752f9-run-httpd\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.127549 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/182e6de6-87eb-490b-a614-82c6063752f9-log-httpd\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.229514 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9b9\" (UniqueName: \"kubernetes.io/projected/182e6de6-87eb-490b-a614-82c6063752f9-kube-api-access-nh9b9\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.229601 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.229715 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/182e6de6-87eb-490b-a614-82c6063752f9-run-httpd\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.229767 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/182e6de6-87eb-490b-a614-82c6063752f9-log-httpd\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.229901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.229976 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-config-data\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.230059 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-scripts\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.230126 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.231529 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/182e6de6-87eb-490b-a614-82c6063752f9-log-httpd\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.231572 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/182e6de6-87eb-490b-a614-82c6063752f9-run-httpd\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.235257 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.235552 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.235604 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-config-data\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.236066 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.236402 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/182e6de6-87eb-490b-a614-82c6063752f9-scripts\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.251484 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9b9\" (UniqueName: \"kubernetes.io/projected/182e6de6-87eb-490b-a614-82c6063752f9-kube-api-access-nh9b9\") pod \"ceilometer-0\" (UID: \"182e6de6-87eb-490b-a614-82c6063752f9\") " pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.306855 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 20:38:33 crc kubenswrapper[4787]: I0219 20:38:33.955784 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 20:38:34 crc kubenswrapper[4787]: I0219 20:38:34.585558 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"182e6de6-87eb-490b-a614-82c6063752f9","Type":"ContainerStarted","Data":"5f32eb1e99df9c1d3c2541f8f845ae2538d19b0729a0dacc283db32f881505da"}
Feb 19 20:38:34 crc kubenswrapper[4787]: I0219 20:38:34.953216 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59d17580-e5e2-4a0a-bfd1-951487271b7a" path="/var/lib/kubelet/pods/59d17580-e5e2-4a0a-bfd1-951487271b7a/volumes"
Feb 19 20:38:35 crc kubenswrapper[4787]: I0219 20:38:35.597087 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"182e6de6-87eb-490b-a614-82c6063752f9","Type":"ContainerStarted","Data":"5ac50488ec18fd73475dbe55123ab24b52add17691b355f1a1b6d61f4ef372d0"}
Feb 19 20:38:36 crc kubenswrapper[4787]: I0219 20:38:36.616344 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"182e6de6-87eb-490b-a614-82c6063752f9","Type":"ContainerStarted","Data":"9031a1bd7a55c1892c6dd670c3629d64f017348831b587accc358aac2c0e8096"}
Feb 19 20:38:37 crc kubenswrapper[4787]: I0219 20:38:37.632502 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"182e6de6-87eb-490b-a614-82c6063752f9","Type":"ContainerStarted","Data":"49feef71e383b209fd01ed7fb9a9b04abafd060ca3004b0ea6c2af3f6c543074"}
Feb 19 20:38:39 crc kubenswrapper[4787]: I0219 20:38:39.263738 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 20:38:39 crc kubenswrapper[4787]: I0219 20:38:39.264067 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 20:38:39 crc kubenswrapper[4787]: I0219 20:38:39.683415 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"182e6de6-87eb-490b-a614-82c6063752f9","Type":"ContainerStarted","Data":"fced1b9a7ae007fa16d8222316e488b2e5b826c1fe5ece2293d801bd5c2d4f89"}
Feb 19 20:38:39 crc kubenswrapper[4787]: I0219 20:38:39.684715 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 19 20:38:39 crc kubenswrapper[4787]: I0219 20:38:39.710646 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.217478902 podStartE2EDuration="7.710588939s" podCreationTimestamp="2026-02-19 20:38:32 +0000 UTC" firstStartedPulling="2026-02-19 20:38:33.963296533 +0000 UTC m=+4781.753962475" lastFinishedPulling="2026-02-19 20:38:38.45640657 +0000 UTC m=+4786.247072512" observedRunningTime="2026-02-19 20:38:39.707671886 +0000 UTC m=+4787.498337828" watchObservedRunningTime="2026-02-19 20:38:39.710588939 +0000 UTC m=+4787.501254871"
Feb 19 20:39:03 crc kubenswrapper[4787]: I0219 20:39:03.351937 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 19 20:39:09 crc kubenswrapper[4787]: I0219 20:39:09.263193 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 20:39:09 crc kubenswrapper[4787]: I0219 20:39:09.263965 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 20:39:09 crc kubenswrapper[4787]: I0219 20:39:09.264033 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq"
Feb 19 20:39:09 crc kubenswrapper[4787]: I0219 20:39:09.265188 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"989356fdb3d601c723661a759f744160fea15ac5712a087d10ed1720f45de4af"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 20:39:09 crc kubenswrapper[4787]: I0219 20:39:09.265260 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://989356fdb3d601c723661a759f744160fea15ac5712a087d10ed1720f45de4af" gracePeriod=600
Feb 19 20:39:10 crc kubenswrapper[4787]: I0219 20:39:10.050176 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="989356fdb3d601c723661a759f744160fea15ac5712a087d10ed1720f45de4af" exitCode=0
Feb 19 20:39:10 crc kubenswrapper[4787]: I0219 20:39:10.050268 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"989356fdb3d601c723661a759f744160fea15ac5712a087d10ed1720f45de4af"}
Feb 19 20:39:10 crc kubenswrapper[4787]: I0219 20:39:10.050599 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9"}
Feb 19 20:39:10 crc kubenswrapper[4787]: I0219 20:39:10.050651 4787 scope.go:117] "RemoveContainer" containerID="15cd6c8816fcb1c0534a5fa3a28df6d6304b1195c3faafc84b29472f99b963e7"
Feb 19 20:41:09 crc kubenswrapper[4787]: I0219 20:41:09.263449 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 20:41:09 crc kubenswrapper[4787]: I0219 20:41:09.264414 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 20:41:39 crc kubenswrapper[4787]: I0219 20:41:39.264016 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 20:41:39 crc kubenswrapper[4787]: I0219 20:41:39.264563 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c"
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:42:09 crc kubenswrapper[4787]: I0219 20:42:09.263815 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:42:09 crc kubenswrapper[4787]: I0219 20:42:09.264364 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:42:09 crc kubenswrapper[4787]: I0219 20:42:09.264417 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:42:09 crc kubenswrapper[4787]: I0219 20:42:09.265110 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:42:09 crc kubenswrapper[4787]: I0219 20:42:09.265162 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" gracePeriod=600 Feb 19 20:42:09 crc kubenswrapper[4787]: E0219 
20:42:09.906328 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:42:10 crc kubenswrapper[4787]: I0219 20:42:10.060228 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" exitCode=0 Feb 19 20:42:10 crc kubenswrapper[4787]: I0219 20:42:10.060276 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9"} Feb 19 20:42:10 crc kubenswrapper[4787]: I0219 20:42:10.060337 4787 scope.go:117] "RemoveContainer" containerID="989356fdb3d601c723661a759f744160fea15ac5712a087d10ed1720f45de4af" Feb 19 20:42:10 crc kubenswrapper[4787]: I0219 20:42:10.061646 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:42:10 crc kubenswrapper[4787]: E0219 20:42:10.062204 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:42:22 crc kubenswrapper[4787]: I0219 20:42:22.904675 4787 scope.go:117] "RemoveContainer" 
containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:42:22 crc kubenswrapper[4787]: E0219 20:42:22.905645 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:42:35 crc kubenswrapper[4787]: I0219 20:42:35.892021 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:42:35 crc kubenswrapper[4787]: E0219 20:42:35.893853 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:42:48 crc kubenswrapper[4787]: I0219 20:42:48.892600 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:42:48 crc kubenswrapper[4787]: E0219 20:42:48.893372 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:42:59 crc kubenswrapper[4787]: I0219 20:42:59.891566 4787 scope.go:117] 
"RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:42:59 crc kubenswrapper[4787]: E0219 20:42:59.892343 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:43:11 crc kubenswrapper[4787]: I0219 20:43:11.891847 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:43:11 crc kubenswrapper[4787]: E0219 20:43:11.892705 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:43:25 crc kubenswrapper[4787]: I0219 20:43:25.892687 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:43:25 crc kubenswrapper[4787]: E0219 20:43:25.893745 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:43:36 crc kubenswrapper[4787]: I0219 20:43:36.893169 
4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:43:36 crc kubenswrapper[4787]: E0219 20:43:36.894052 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:43:50 crc kubenswrapper[4787]: I0219 20:43:50.892474 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:43:50 crc kubenswrapper[4787]: E0219 20:43:50.893373 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:44:03 crc kubenswrapper[4787]: I0219 20:44:03.892123 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:44:03 crc kubenswrapper[4787]: E0219 20:44:03.894504 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:44:16 crc kubenswrapper[4787]: I0219 
20:44:16.891911 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:44:16 crc kubenswrapper[4787]: E0219 20:44:16.892860 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:44:31 crc kubenswrapper[4787]: I0219 20:44:31.891463 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:44:31 crc kubenswrapper[4787]: E0219 20:44:31.892728 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:44:42 crc kubenswrapper[4787]: I0219 20:44:42.899676 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:44:42 crc kubenswrapper[4787]: E0219 20:44:42.900436 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:44:56 crc 
kubenswrapper[4787]: I0219 20:44:56.892874 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:44:56 crc kubenswrapper[4787]: E0219 20:44:56.893786 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.218783 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m"] Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.222125 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.226741 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.229133 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.231845 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m"] Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.247262 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5s5w\" (UniqueName: \"kubernetes.io/projected/23fb2016-93c4-4bc8-9c1d-10580af01dc7-kube-api-access-w5s5w\") pod \"collect-profiles-29525565-4c29m\" (UID: 
\"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.247409 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23fb2016-93c4-4bc8-9c1d-10580af01dc7-secret-volume\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.247448 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23fb2016-93c4-4bc8-9c1d-10580af01dc7-config-volume\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.350198 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23fb2016-93c4-4bc8-9c1d-10580af01dc7-secret-volume\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.350273 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23fb2016-93c4-4bc8-9c1d-10580af01dc7-config-volume\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.350470 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w5s5w\" (UniqueName: \"kubernetes.io/projected/23fb2016-93c4-4bc8-9c1d-10580af01dc7-kube-api-access-w5s5w\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.351278 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23fb2016-93c4-4bc8-9c1d-10580af01dc7-config-volume\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.359537 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23fb2016-93c4-4bc8-9c1d-10580af01dc7-secret-volume\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.367429 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5s5w\" (UniqueName: \"kubernetes.io/projected/23fb2016-93c4-4bc8-9c1d-10580af01dc7-kube-api-access-w5s5w\") pod \"collect-profiles-29525565-4c29m\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:00 crc kubenswrapper[4787]: I0219 20:45:00.544529 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:01 crc kubenswrapper[4787]: W0219 20:45:01.251406 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23fb2016_93c4_4bc8_9c1d_10580af01dc7.slice/crio-634665447bc502f9909054a2067b5130cf70a8faf76db6caddac49a3a0789be5 WatchSource:0}: Error finding container 634665447bc502f9909054a2067b5130cf70a8faf76db6caddac49a3a0789be5: Status 404 returned error can't find the container with id 634665447bc502f9909054a2067b5130cf70a8faf76db6caddac49a3a0789be5 Feb 19 20:45:01 crc kubenswrapper[4787]: I0219 20:45:01.256272 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m"] Feb 19 20:45:02 crc kubenswrapper[4787]: I0219 20:45:02.037275 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" event={"ID":"23fb2016-93c4-4bc8-9c1d-10580af01dc7","Type":"ContainerStarted","Data":"41420ba449eacdb68272ceb536843c195e7e9f42f53c5e09a6e41a87a689004d"} Feb 19 20:45:02 crc kubenswrapper[4787]: I0219 20:45:02.038039 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" event={"ID":"23fb2016-93c4-4bc8-9c1d-10580af01dc7","Type":"ContainerStarted","Data":"634665447bc502f9909054a2067b5130cf70a8faf76db6caddac49a3a0789be5"} Feb 19 20:45:02 crc kubenswrapper[4787]: E0219 20:45:02.289949 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23fb2016_93c4_4bc8_9c1d_10580af01dc7.slice/crio-41420ba449eacdb68272ceb536843c195e7e9f42f53c5e09a6e41a87a689004d.scope\": RecentStats: unable to find data in memory cache]" Feb 19 20:45:03 crc kubenswrapper[4787]: I0219 20:45:03.063352 
4787 generic.go:334] "Generic (PLEG): container finished" podID="23fb2016-93c4-4bc8-9c1d-10580af01dc7" containerID="41420ba449eacdb68272ceb536843c195e7e9f42f53c5e09a6e41a87a689004d" exitCode=0 Feb 19 20:45:03 crc kubenswrapper[4787]: I0219 20:45:03.063721 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" event={"ID":"23fb2016-93c4-4bc8-9c1d-10580af01dc7","Type":"ContainerDied","Data":"41420ba449eacdb68272ceb536843c195e7e9f42f53c5e09a6e41a87a689004d"} Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.545734 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.675841 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23fb2016-93c4-4bc8-9c1d-10580af01dc7-secret-volume\") pod \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.676228 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5s5w\" (UniqueName: \"kubernetes.io/projected/23fb2016-93c4-4bc8-9c1d-10580af01dc7-kube-api-access-w5s5w\") pod \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.676269 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23fb2016-93c4-4bc8-9c1d-10580af01dc7-config-volume\") pod \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\" (UID: \"23fb2016-93c4-4bc8-9c1d-10580af01dc7\") " Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.676861 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/23fb2016-93c4-4bc8-9c1d-10580af01dc7-config-volume" (OuterVolumeSpecName: "config-volume") pod "23fb2016-93c4-4bc8-9c1d-10580af01dc7" (UID: "23fb2016-93c4-4bc8-9c1d-10580af01dc7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.677276 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23fb2016-93c4-4bc8-9c1d-10580af01dc7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.694915 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23fb2016-93c4-4bc8-9c1d-10580af01dc7-kube-api-access-w5s5w" (OuterVolumeSpecName: "kube-api-access-w5s5w") pod "23fb2016-93c4-4bc8-9c1d-10580af01dc7" (UID: "23fb2016-93c4-4bc8-9c1d-10580af01dc7"). InnerVolumeSpecName "kube-api-access-w5s5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.694954 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23fb2016-93c4-4bc8-9c1d-10580af01dc7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "23fb2016-93c4-4bc8-9c1d-10580af01dc7" (UID: "23fb2016-93c4-4bc8-9c1d-10580af01dc7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.779564 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23fb2016-93c4-4bc8-9c1d-10580af01dc7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:04 crc kubenswrapper[4787]: I0219 20:45:04.779933 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5s5w\" (UniqueName: \"kubernetes.io/projected/23fb2016-93c4-4bc8-9c1d-10580af01dc7-kube-api-access-w5s5w\") on node \"crc\" DevicePath \"\"" Feb 19 20:45:05 crc kubenswrapper[4787]: I0219 20:45:05.087310 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" event={"ID":"23fb2016-93c4-4bc8-9c1d-10580af01dc7","Type":"ContainerDied","Data":"634665447bc502f9909054a2067b5130cf70a8faf76db6caddac49a3a0789be5"} Feb 19 20:45:05 crc kubenswrapper[4787]: I0219 20:45:05.087353 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="634665447bc502f9909054a2067b5130cf70a8faf76db6caddac49a3a0789be5" Feb 19 20:45:05 crc kubenswrapper[4787]: I0219 20:45:05.087390 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525565-4c29m" Feb 19 20:45:05 crc kubenswrapper[4787]: I0219 20:45:05.633512 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r"] Feb 19 20:45:05 crc kubenswrapper[4787]: I0219 20:45:05.643426 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525520-rks2r"] Feb 19 20:45:06 crc kubenswrapper[4787]: I0219 20:45:06.905711 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6296235e-0968-4e57-9f65-2b17d626a241" path="/var/lib/kubelet/pods/6296235e-0968-4e57-9f65-2b17d626a241/volumes" Feb 19 20:45:08 crc kubenswrapper[4787]: I0219 20:45:08.892709 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:45:08 crc kubenswrapper[4787]: E0219 20:45:08.893319 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:45:10 crc kubenswrapper[4787]: I0219 20:45:10.929193 4787 scope.go:117] "RemoveContainer" containerID="9c9fcf288f186da605839b59928711fb1ca3c9a0b4b8771b8d602f544c80d976" Feb 19 20:45:19 crc kubenswrapper[4787]: I0219 20:45:19.892790 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:45:19 crc kubenswrapper[4787]: E0219 20:45:19.893493 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:45:33 crc kubenswrapper[4787]: I0219 20:45:33.892409 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:45:33 crc kubenswrapper[4787]: E0219 20:45:33.893435 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:45:46 crc kubenswrapper[4787]: I0219 20:45:46.891916 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:45:46 crc kubenswrapper[4787]: E0219 20:45:46.892962 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:46:00 crc kubenswrapper[4787]: I0219 20:46:00.892731 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:46:00 crc kubenswrapper[4787]: E0219 20:46:00.895176 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:46:13 crc kubenswrapper[4787]: I0219 20:46:13.892723 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:46:13 crc kubenswrapper[4787]: E0219 20:46:13.893679 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:46:25 crc kubenswrapper[4787]: I0219 20:46:25.891457 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:46:25 crc kubenswrapper[4787]: E0219 20:46:25.892202 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:46:38 crc kubenswrapper[4787]: I0219 20:46:38.891993 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:46:38 crc kubenswrapper[4787]: E0219 20:46:38.893233 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:46:53 crc kubenswrapper[4787]: I0219 20:46:53.893994 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:46:53 crc kubenswrapper[4787]: E0219 20:46:53.895852 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.532972 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lv8n"] Feb 19 20:46:55 crc kubenswrapper[4787]: E0219 20:46:55.534930 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23fb2016-93c4-4bc8-9c1d-10580af01dc7" containerName="collect-profiles" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.535023 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="23fb2016-93c4-4bc8-9c1d-10580af01dc7" containerName="collect-profiles" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.535347 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="23fb2016-93c4-4bc8-9c1d-10580af01dc7" containerName="collect-profiles" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.537510 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.557483 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lv8n"] Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.624397 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-catalog-content\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.624492 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-utilities\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.624532 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb5fp\" (UniqueName: \"kubernetes.io/projected/9f7b8bc6-5224-434a-980c-540d0eaa7376-kube-api-access-mb5fp\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.726128 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-catalog-content\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.726217 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-utilities\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.726253 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb5fp\" (UniqueName: \"kubernetes.io/projected/9f7b8bc6-5224-434a-980c-540d0eaa7376-kube-api-access-mb5fp\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.726801 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-catalog-content\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.727280 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-utilities\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.747214 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb5fp\" (UniqueName: \"kubernetes.io/projected/9f7b8bc6-5224-434a-980c-540d0eaa7376-kube-api-access-mb5fp\") pod \"redhat-operators-6lv8n\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:55 crc kubenswrapper[4787]: I0219 20:46:55.858729 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:46:56 crc kubenswrapper[4787]: I0219 20:46:56.499074 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lv8n"] Feb 19 20:46:56 crc kubenswrapper[4787]: W0219 20:46:56.543426 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f7b8bc6_5224_434a_980c_540d0eaa7376.slice/crio-7abc4dcfd00cbeadbc7edca30dd0b5cb4b1034c1ece192e9b3c1ac1212cac499 WatchSource:0}: Error finding container 7abc4dcfd00cbeadbc7edca30dd0b5cb4b1034c1ece192e9b3c1ac1212cac499: Status 404 returned error can't find the container with id 7abc4dcfd00cbeadbc7edca30dd0b5cb4b1034c1ece192e9b3c1ac1212cac499 Feb 19 20:46:57 crc kubenswrapper[4787]: I0219 20:46:57.495588 4787 generic.go:334] "Generic (PLEG): container finished" podID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerID="a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf" exitCode=0 Feb 19 20:46:57 crc kubenswrapper[4787]: I0219 20:46:57.495756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lv8n" event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerDied","Data":"a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf"} Feb 19 20:46:57 crc kubenswrapper[4787]: I0219 20:46:57.496334 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lv8n" event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerStarted","Data":"7abc4dcfd00cbeadbc7edca30dd0b5cb4b1034c1ece192e9b3c1ac1212cac499"} Feb 19 20:46:57 crc kubenswrapper[4787]: I0219 20:46:57.508126 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:46:59 crc kubenswrapper[4787]: I0219 20:46:59.527979 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6lv8n" event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerStarted","Data":"576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa"} Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.407216 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ljpzt"] Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.410444 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.441571 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljpzt"] Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.462452 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-catalog-content\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.462506 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-utilities\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.462936 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5xx7\" (UniqueName: \"kubernetes.io/projected/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-kube-api-access-h5xx7\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc 
kubenswrapper[4787]: I0219 20:47:03.564903 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5xx7\" (UniqueName: \"kubernetes.io/projected/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-kube-api-access-h5xx7\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.565492 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-catalog-content\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.565530 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-utilities\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.566262 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-catalog-content\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.567054 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-utilities\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.651891 4787 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5xx7\" (UniqueName: \"kubernetes.io/projected/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-kube-api-access-h5xx7\") pod \"redhat-marketplace-ljpzt\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:03 crc kubenswrapper[4787]: I0219 20:47:03.735760 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:04 crc kubenswrapper[4787]: W0219 20:47:04.565007 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc559fb3f_1a69_4c68_b415_ba88d0dfc4a2.slice/crio-2dbbcbe90a13209ee3405571183a03cbaca6cb598ef2a7f52c27de77ffd76814 WatchSource:0}: Error finding container 2dbbcbe90a13209ee3405571183a03cbaca6cb598ef2a7f52c27de77ffd76814: Status 404 returned error can't find the container with id 2dbbcbe90a13209ee3405571183a03cbaca6cb598ef2a7f52c27de77ffd76814 Feb 19 20:47:04 crc kubenswrapper[4787]: I0219 20:47:04.568025 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljpzt"] Feb 19 20:47:04 crc kubenswrapper[4787]: I0219 20:47:04.586743 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerStarted","Data":"2dbbcbe90a13209ee3405571183a03cbaca6cb598ef2a7f52c27de77ffd76814"} Feb 19 20:47:04 crc kubenswrapper[4787]: I0219 20:47:04.590199 4787 generic.go:334] "Generic (PLEG): container finished" podID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerID="576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa" exitCode=0 Feb 19 20:47:04 crc kubenswrapper[4787]: I0219 20:47:04.590247 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lv8n" 
event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerDied","Data":"576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa"} Feb 19 20:47:05 crc kubenswrapper[4787]: I0219 20:47:05.601427 4787 generic.go:334] "Generic (PLEG): container finished" podID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerID="5f921f34459df8c9a880c05f6d4359d6c6104795ab27eac9ebf1823f694b67a8" exitCode=0 Feb 19 20:47:05 crc kubenswrapper[4787]: I0219 20:47:05.601599 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerDied","Data":"5f921f34459df8c9a880c05f6d4359d6c6104795ab27eac9ebf1823f694b67a8"} Feb 19 20:47:06 crc kubenswrapper[4787]: I0219 20:47:06.618291 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerStarted","Data":"6d0504c3db97b4e416ad32fa51aa2e2d80212857e721f726a0160e894fdc5676"} Feb 19 20:47:06 crc kubenswrapper[4787]: I0219 20:47:06.626094 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lv8n" event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerStarted","Data":"a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00"} Feb 19 20:47:06 crc kubenswrapper[4787]: I0219 20:47:06.665724 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lv8n" podStartSLOduration=4.045047918 podStartE2EDuration="11.665704785s" podCreationTimestamp="2026-02-19 20:46:55 +0000 UTC" firstStartedPulling="2026-02-19 20:46:57.50049666 +0000 UTC m=+5285.291162612" lastFinishedPulling="2026-02-19 20:47:05.121153527 +0000 UTC m=+5292.911819479" observedRunningTime="2026-02-19 20:47:06.661357901 +0000 UTC m=+5294.452023843" watchObservedRunningTime="2026-02-19 20:47:06.665704785 +0000 UTC 
m=+5294.456370727" Feb 19 20:47:07 crc kubenswrapper[4787]: I0219 20:47:07.892068 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:47:07 crc kubenswrapper[4787]: E0219 20:47:07.892662 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:47:08 crc kubenswrapper[4787]: I0219 20:47:08.648466 4787 generic.go:334] "Generic (PLEG): container finished" podID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerID="6d0504c3db97b4e416ad32fa51aa2e2d80212857e721f726a0160e894fdc5676" exitCode=0 Feb 19 20:47:08 crc kubenswrapper[4787]: I0219 20:47:08.648706 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerDied","Data":"6d0504c3db97b4e416ad32fa51aa2e2d80212857e721f726a0160e894fdc5676"} Feb 19 20:47:09 crc kubenswrapper[4787]: I0219 20:47:09.663155 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerStarted","Data":"015c825c2d05f718b925c150326dd7b4decb3b2e7849055077c296ce12b22857"} Feb 19 20:47:09 crc kubenswrapper[4787]: I0219 20:47:09.695230 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ljpzt" podStartSLOduration=3.247532258 podStartE2EDuration="6.695211797s" podCreationTimestamp="2026-02-19 20:47:03 +0000 UTC" firstStartedPulling="2026-02-19 20:47:05.604250088 +0000 UTC m=+5293.394916030" 
lastFinishedPulling="2026-02-19 20:47:09.051929627 +0000 UTC m=+5296.842595569" observedRunningTime="2026-02-19 20:47:09.682188427 +0000 UTC m=+5297.472854369" watchObservedRunningTime="2026-02-19 20:47:09.695211797 +0000 UTC m=+5297.485877729" Feb 19 20:47:13 crc kubenswrapper[4787]: I0219 20:47:13.736720 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:13 crc kubenswrapper[4787]: I0219 20:47:13.737166 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:14 crc kubenswrapper[4787]: I0219 20:47:14.785468 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-ljpzt" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="registry-server" probeResult="failure" output=< Feb 19 20:47:14 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:47:14 crc kubenswrapper[4787]: > Feb 19 20:47:15 crc kubenswrapper[4787]: I0219 20:47:15.859013 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:47:15 crc kubenswrapper[4787]: I0219 20:47:15.859360 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:47:16 crc kubenswrapper[4787]: I0219 20:47:16.917878 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lv8n" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" probeResult="failure" output=< Feb 19 20:47:16 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:47:16 crc kubenswrapper[4787]: > Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.336809 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b58cl"] 
Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.341662 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.354452 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b58cl"] Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.420816 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9454\" (UniqueName: \"kubernetes.io/projected/402fe83a-6ac0-49f0-8002-16b473ccab19-kube-api-access-t9454\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.420913 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-catalog-content\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.421087 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-utilities\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.523074 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9454\" (UniqueName: \"kubernetes.io/projected/402fe83a-6ac0-49f0-8002-16b473ccab19-kube-api-access-t9454\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " 
pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.523205 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-catalog-content\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.523418 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-utilities\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.523908 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-catalog-content\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.523932 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-utilities\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.548043 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9454\" (UniqueName: \"kubernetes.io/projected/402fe83a-6ac0-49f0-8002-16b473ccab19-kube-api-access-t9454\") pod \"certified-operators-b58cl\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " 
pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:17 crc kubenswrapper[4787]: I0219 20:47:17.713490 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:18 crc kubenswrapper[4787]: I0219 20:47:18.189766 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b58cl"] Feb 19 20:47:18 crc kubenswrapper[4787]: I0219 20:47:18.757098 4787 generic.go:334] "Generic (PLEG): container finished" podID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerID="d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210" exitCode=0 Feb 19 20:47:18 crc kubenswrapper[4787]: I0219 20:47:18.757136 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerDied","Data":"d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210"} Feb 19 20:47:18 crc kubenswrapper[4787]: I0219 20:47:18.757181 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerStarted","Data":"ca0f54360eeb7d1a3a9cc371c397bdd4afdd4306fb0c6e0b255b653e779d1af7"} Feb 19 20:47:19 crc kubenswrapper[4787]: I0219 20:47:19.770917 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerStarted","Data":"5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8"} Feb 19 20:47:19 crc kubenswrapper[4787]: I0219 20:47:19.892920 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:47:20 crc kubenswrapper[4787]: I0219 20:47:20.801589 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"55355501af7cce90cd3e9d997bd7966a0e7d3f6b53e7334e1a4f61c4379fa62d"} Feb 19 20:47:21 crc kubenswrapper[4787]: I0219 20:47:21.816358 4787 generic.go:334] "Generic (PLEG): container finished" podID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerID="5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8" exitCode=0 Feb 19 20:47:21 crc kubenswrapper[4787]: I0219 20:47:21.816408 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerDied","Data":"5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8"} Feb 19 20:47:22 crc kubenswrapper[4787]: I0219 20:47:22.829224 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerStarted","Data":"8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2"} Feb 19 20:47:22 crc kubenswrapper[4787]: I0219 20:47:22.859647 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b58cl" podStartSLOduration=2.337333368 podStartE2EDuration="5.859598766s" podCreationTimestamp="2026-02-19 20:47:17 +0000 UTC" firstStartedPulling="2026-02-19 20:47:18.759294662 +0000 UTC m=+5306.549960604" lastFinishedPulling="2026-02-19 20:47:22.28156006 +0000 UTC m=+5310.072226002" observedRunningTime="2026-02-19 20:47:22.857864247 +0000 UTC m=+5310.648530199" watchObservedRunningTime="2026-02-19 20:47:22.859598766 +0000 UTC m=+5310.650264708" Feb 19 20:47:23 crc kubenswrapper[4787]: I0219 20:47:23.788084 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:23 crc kubenswrapper[4787]: I0219 
20:47:23.846928 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.311646 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jgfs"] Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.314587 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.361646 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jgfs"] Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.416380 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-utilities\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.416444 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-catalog-content\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.416671 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4sfc\" (UniqueName: \"kubernetes.io/projected/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-kube-api-access-p4sfc\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.518866 4787 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-utilities\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.518917 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-catalog-content\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.518998 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4sfc\" (UniqueName: \"kubernetes.io/projected/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-kube-api-access-p4sfc\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.520238 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-utilities\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.520242 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-catalog-content\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.547668 4787 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p4sfc\" (UniqueName: \"kubernetes.io/projected/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-kube-api-access-p4sfc\") pod \"community-operators-6jgfs\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:24 crc kubenswrapper[4787]: I0219 20:47:24.647570 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:25 crc kubenswrapper[4787]: I0219 20:47:25.192407 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jgfs"] Feb 19 20:47:25 crc kubenswrapper[4787]: W0219 20:47:25.193453 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfd6c78b_285c_430e_89fb_ea84faf4f2d4.slice/crio-a312193743e025bd4840940332f082a8524ec5086246da15c9f76bced11ac065 WatchSource:0}: Error finding container a312193743e025bd4840940332f082a8524ec5086246da15c9f76bced11ac065: Status 404 returned error can't find the container with id a312193743e025bd4840940332f082a8524ec5086246da15c9f76bced11ac065 Feb 19 20:47:25 crc kubenswrapper[4787]: I0219 20:47:25.864164 4787 generic.go:334] "Generic (PLEG): container finished" podID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerID="23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d" exitCode=0 Feb 19 20:47:25 crc kubenswrapper[4787]: I0219 20:47:25.864269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerDied","Data":"23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d"} Feb 19 20:47:25 crc kubenswrapper[4787]: I0219 20:47:25.864477 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" 
event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerStarted","Data":"a312193743e025bd4840940332f082a8524ec5086246da15c9f76bced11ac065"} Feb 19 20:47:26 crc kubenswrapper[4787]: I0219 20:47:26.708767 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljpzt"] Feb 19 20:47:26 crc kubenswrapper[4787]: I0219 20:47:26.711813 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ljpzt" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="registry-server" containerID="cri-o://015c825c2d05f718b925c150326dd7b4decb3b2e7849055077c296ce12b22857" gracePeriod=2 Feb 19 20:47:26 crc kubenswrapper[4787]: I0219 20:47:26.883530 4787 generic.go:334] "Generic (PLEG): container finished" podID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerID="015c825c2d05f718b925c150326dd7b4decb3b2e7849055077c296ce12b22857" exitCode=0 Feb 19 20:47:26 crc kubenswrapper[4787]: I0219 20:47:26.883595 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerDied","Data":"015c825c2d05f718b925c150326dd7b4decb3b2e7849055077c296ce12b22857"} Feb 19 20:47:26 crc kubenswrapper[4787]: I0219 20:47:26.886556 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerStarted","Data":"6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60"} Feb 19 20:47:26 crc kubenswrapper[4787]: I0219 20:47:26.913916 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lv8n" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" probeResult="failure" output=< Feb 19 20:47:26 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:47:26 crc 
kubenswrapper[4787]: > Feb 19 20:47:27 crc kubenswrapper[4787]: I0219 20:47:27.714476 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:27 crc kubenswrapper[4787]: I0219 20:47:27.715148 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:27 crc kubenswrapper[4787]: I0219 20:47:27.913249 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:27 crc kubenswrapper[4787]: I0219 20:47:27.917042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ljpzt" event={"ID":"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2","Type":"ContainerDied","Data":"2dbbcbe90a13209ee3405571183a03cbaca6cb598ef2a7f52c27de77ffd76814"} Feb 19 20:47:27 crc kubenswrapper[4787]: I0219 20:47:27.917088 4787 scope.go:117] "RemoveContainer" containerID="015c825c2d05f718b925c150326dd7b4decb3b2e7849055077c296ce12b22857" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.007097 4787 scope.go:117] "RemoveContainer" containerID="6d0504c3db97b4e416ad32fa51aa2e2d80212857e721f726a0160e894fdc5676" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.043669 4787 scope.go:117] "RemoveContainer" containerID="5f921f34459df8c9a880c05f6d4359d6c6104795ab27eac9ebf1823f694b67a8" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.105698 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-utilities\") pod \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.106068 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-catalog-content\") pod \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.106161 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5xx7\" (UniqueName: \"kubernetes.io/projected/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-kube-api-access-h5xx7\") pod \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\" (UID: \"c559fb3f-1a69-4c68-b415-ba88d0dfc4a2\") " Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.108171 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-utilities" (OuterVolumeSpecName: "utilities") pod "c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" (UID: "c559fb3f-1a69-4c68-b415-ba88d0dfc4a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.114823 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-kube-api-access-h5xx7" (OuterVolumeSpecName: "kube-api-access-h5xx7") pod "c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" (UID: "c559fb3f-1a69-4c68-b415-ba88d0dfc4a2"). InnerVolumeSpecName "kube-api-access-h5xx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.123735 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" (UID: "c559fb3f-1a69-4c68-b415-ba88d0dfc4a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.209028 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.209063 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.209074 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5xx7\" (UniqueName: \"kubernetes.io/projected/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2-kube-api-access-h5xx7\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.927867 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ljpzt" Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.931689 4787 generic.go:334] "Generic (PLEG): container finished" podID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerID="6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60" exitCode=0 Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.931744 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerDied","Data":"6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60"} Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.973562 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljpzt"] Feb 19 20:47:28 crc kubenswrapper[4787]: I0219 20:47:28.986088 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ljpzt"] Feb 19 
20:47:29 crc kubenswrapper[4787]: I0219 20:47:29.004898 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-b58cl" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="registry-server" probeResult="failure" output=< Feb 19 20:47:29 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:47:29 crc kubenswrapper[4787]: > Feb 19 20:47:30 crc kubenswrapper[4787]: I0219 20:47:30.905797 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" path="/var/lib/kubelet/pods/c559fb3f-1a69-4c68-b415-ba88d0dfc4a2/volumes" Feb 19 20:47:30 crc kubenswrapper[4787]: I0219 20:47:30.953914 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerStarted","Data":"206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb"} Feb 19 20:47:30 crc kubenswrapper[4787]: I0219 20:47:30.981445 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jgfs" podStartSLOduration=3.476236326 podStartE2EDuration="6.981418559s" podCreationTimestamp="2026-02-19 20:47:24 +0000 UTC" firstStartedPulling="2026-02-19 20:47:25.866542369 +0000 UTC m=+5313.657208301" lastFinishedPulling="2026-02-19 20:47:29.371724592 +0000 UTC m=+5317.162390534" observedRunningTime="2026-02-19 20:47:30.96914953 +0000 UTC m=+5318.759815482" watchObservedRunningTime="2026-02-19 20:47:30.981418559 +0000 UTC m=+5318.772084501" Feb 19 20:47:34 crc kubenswrapper[4787]: I0219 20:47:34.647978 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:34 crc kubenswrapper[4787]: I0219 20:47:34.648596 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jgfs" 
Feb 19 20:47:34 crc kubenswrapper[4787]: I0219 20:47:34.702053 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:35 crc kubenswrapper[4787]: I0219 20:47:35.051135 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:36 crc kubenswrapper[4787]: I0219 20:47:36.110453 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jgfs"] Feb 19 20:47:36 crc kubenswrapper[4787]: I0219 20:47:36.920808 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lv8n" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" probeResult="failure" output=< Feb 19 20:47:36 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:47:36 crc kubenswrapper[4787]: > Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.022069 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6jgfs" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="registry-server" containerID="cri-o://206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb" gracePeriod=2 Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.735886 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.765048 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.819738 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.896321 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4sfc\" (UniqueName: \"kubernetes.io/projected/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-kube-api-access-p4sfc\") pod \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.896400 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-catalog-content\") pod \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.896553 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-utilities\") pod \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\" (UID: \"bfd6c78b-285c-430e-89fb-ea84faf4f2d4\") " Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.897553 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-utilities" (OuterVolumeSpecName: "utilities") pod "bfd6c78b-285c-430e-89fb-ea84faf4f2d4" (UID: "bfd6c78b-285c-430e-89fb-ea84faf4f2d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.898373 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:37 crc kubenswrapper[4787]: I0219 20:47:37.947467 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfd6c78b-285c-430e-89fb-ea84faf4f2d4" (UID: "bfd6c78b-285c-430e-89fb-ea84faf4f2d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.002712 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.037912 4787 generic.go:334] "Generic (PLEG): container finished" podID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerID="206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb" exitCode=0 Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.038012 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6jgfs" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.038076 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerDied","Data":"206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb"} Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.038116 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jgfs" event={"ID":"bfd6c78b-285c-430e-89fb-ea84faf4f2d4","Type":"ContainerDied","Data":"a312193743e025bd4840940332f082a8524ec5086246da15c9f76bced11ac065"} Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.038143 4787 scope.go:117] "RemoveContainer" containerID="206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.082449 4787 scope.go:117] "RemoveContainer" containerID="6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.446352 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-kube-api-access-p4sfc" (OuterVolumeSpecName: "kube-api-access-p4sfc") pod "bfd6c78b-285c-430e-89fb-ea84faf4f2d4" (UID: "bfd6c78b-285c-430e-89fb-ea84faf4f2d4"). InnerVolumeSpecName "kube-api-access-p4sfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.463707 4787 scope.go:117] "RemoveContainer" containerID="23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.520148 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4sfc\" (UniqueName: \"kubernetes.io/projected/bfd6c78b-285c-430e-89fb-ea84faf4f2d4-kube-api-access-p4sfc\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.595394 4787 scope.go:117] "RemoveContainer" containerID="206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb" Feb 19 20:47:38 crc kubenswrapper[4787]: E0219 20:47:38.598404 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb\": container with ID starting with 206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb not found: ID does not exist" containerID="206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.598463 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb"} err="failed to get container status \"206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb\": rpc error: code = NotFound desc = could not find container \"206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb\": container with ID starting with 206fa11fd5724dcb72f4200af57348a8a0e23635bf50db1da05f711a47bab9bb not found: ID does not exist" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.598494 4787 scope.go:117] "RemoveContainer" containerID="6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60" Feb 19 20:47:38 crc kubenswrapper[4787]: E0219 20:47:38.598934 
4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60\": container with ID starting with 6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60 not found: ID does not exist" containerID="6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.598954 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60"} err="failed to get container status \"6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60\": rpc error: code = NotFound desc = could not find container \"6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60\": container with ID starting with 6ed1720bdb590c948947344757cdcff2bec9c767d55d8ba4630ffe00861d0d60 not found: ID does not exist" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.598968 4787 scope.go:117] "RemoveContainer" containerID="23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d" Feb 19 20:47:38 crc kubenswrapper[4787]: E0219 20:47:38.599336 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d\": container with ID starting with 23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d not found: ID does not exist" containerID="23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.599365 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d"} err="failed to get container status \"23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d\": rpc error: code = 
NotFound desc = could not find container \"23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d\": container with ID starting with 23c6fbaa289f524fe1aa022de4f524aa46c5278d2d192d3bb6d650f88d37e82d not found: ID does not exist" Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.681938 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jgfs"] Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.694878 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6jgfs"] Feb 19 20:47:38 crc kubenswrapper[4787]: I0219 20:47:38.905537 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" path="/var/lib/kubelet/pods/bfd6c78b-285c-430e-89fb-ea84faf4f2d4/volumes" Feb 19 20:47:40 crc kubenswrapper[4787]: I0219 20:47:40.103650 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b58cl"] Feb 19 20:47:40 crc kubenswrapper[4787]: I0219 20:47:40.104465 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b58cl" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="registry-server" containerID="cri-o://8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2" gracePeriod=2 Feb 19 20:47:40 crc kubenswrapper[4787]: I0219 20:47:40.993902 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.083232 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-catalog-content\") pod \"402fe83a-6ac0-49f0-8002-16b473ccab19\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.083344 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9454\" (UniqueName: \"kubernetes.io/projected/402fe83a-6ac0-49f0-8002-16b473ccab19-kube-api-access-t9454\") pod \"402fe83a-6ac0-49f0-8002-16b473ccab19\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.083920 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-utilities\") pod \"402fe83a-6ac0-49f0-8002-16b473ccab19\" (UID: \"402fe83a-6ac0-49f0-8002-16b473ccab19\") " Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.084338 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-utilities" (OuterVolumeSpecName: "utilities") pod "402fe83a-6ac0-49f0-8002-16b473ccab19" (UID: "402fe83a-6ac0-49f0-8002-16b473ccab19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.084966 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.088424 4787 generic.go:334] "Generic (PLEG): container finished" podID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerID="8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2" exitCode=0 Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.088461 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerDied","Data":"8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2"} Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.088489 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b58cl" event={"ID":"402fe83a-6ac0-49f0-8002-16b473ccab19","Type":"ContainerDied","Data":"ca0f54360eeb7d1a3a9cc371c397bdd4afdd4306fb0c6e0b255b653e779d1af7"} Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.088506 4787 scope.go:117] "RemoveContainer" containerID="8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.088648 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b58cl" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.090965 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402fe83a-6ac0-49f0-8002-16b473ccab19-kube-api-access-t9454" (OuterVolumeSpecName: "kube-api-access-t9454") pod "402fe83a-6ac0-49f0-8002-16b473ccab19" (UID: "402fe83a-6ac0-49f0-8002-16b473ccab19"). InnerVolumeSpecName "kube-api-access-t9454". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.138672 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "402fe83a-6ac0-49f0-8002-16b473ccab19" (UID: "402fe83a-6ac0-49f0-8002-16b473ccab19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.188244 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/402fe83a-6ac0-49f0-8002-16b473ccab19-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.188275 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9454\" (UniqueName: \"kubernetes.io/projected/402fe83a-6ac0-49f0-8002-16b473ccab19-kube-api-access-t9454\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.189634 4787 scope.go:117] "RemoveContainer" containerID="5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.212000 4787 scope.go:117] "RemoveContainer" containerID="d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.286429 4787 scope.go:117] "RemoveContainer" containerID="8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2" Feb 19 20:47:41 crc kubenswrapper[4787]: E0219 20:47:41.287074 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2\": container with ID starting with 8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2 not found: ID does not exist" 
containerID="8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.287145 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2"} err="failed to get container status \"8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2\": rpc error: code = NotFound desc = could not find container \"8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2\": container with ID starting with 8e58195df6f0fe1c8b61f763e17119f4161af334bdb6a0133d3563228a6dc1b2 not found: ID does not exist" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.287178 4787 scope.go:117] "RemoveContainer" containerID="5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8" Feb 19 20:47:41 crc kubenswrapper[4787]: E0219 20:47:41.287515 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8\": container with ID starting with 5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8 not found: ID does not exist" containerID="5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.287543 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8"} err="failed to get container status \"5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8\": rpc error: code = NotFound desc = could not find container \"5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8\": container with ID starting with 5c4d0b8df6f621c594e3880cc9301439a24325ce7d0804d5e9b4b835cb8712d8 not found: ID does not exist" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.287585 4787 scope.go:117] 
"RemoveContainer" containerID="d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210" Feb 19 20:47:41 crc kubenswrapper[4787]: E0219 20:47:41.287878 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210\": container with ID starting with d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210 not found: ID does not exist" containerID="d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.287914 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210"} err="failed to get container status \"d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210\": rpc error: code = NotFound desc = could not find container \"d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210\": container with ID starting with d233e402136ac3db7254e1abb828f19bbccbd2e2464fc6fe402eb09d2ade9210 not found: ID does not exist" Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.426907 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b58cl"] Feb 19 20:47:41 crc kubenswrapper[4787]: I0219 20:47:41.438424 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b58cl"] Feb 19 20:47:42 crc kubenswrapper[4787]: I0219 20:47:42.908757 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" path="/var/lib/kubelet/pods/402fe83a-6ac0-49f0-8002-16b473ccab19/volumes" Feb 19 20:47:45 crc kubenswrapper[4787]: I0219 20:47:45.922189 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:47:45 crc kubenswrapper[4787]: I0219 
20:47:45.996091 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:47:46 crc kubenswrapper[4787]: I0219 20:47:46.167684 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lv8n"] Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.158409 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lv8n" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" containerID="cri-o://a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00" gracePeriod=2 Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.672096 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.837487 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb5fp\" (UniqueName: \"kubernetes.io/projected/9f7b8bc6-5224-434a-980c-540d0eaa7376-kube-api-access-mb5fp\") pod \"9f7b8bc6-5224-434a-980c-540d0eaa7376\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.838911 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-catalog-content\") pod \"9f7b8bc6-5224-434a-980c-540d0eaa7376\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.844917 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-utilities\") pod \"9f7b8bc6-5224-434a-980c-540d0eaa7376\" (UID: \"9f7b8bc6-5224-434a-980c-540d0eaa7376\") " Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 
20:47:47.845957 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-utilities" (OuterVolumeSpecName: "utilities") pod "9f7b8bc6-5224-434a-980c-540d0eaa7376" (UID: "9f7b8bc6-5224-434a-980c-540d0eaa7376"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.846274 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.850959 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7b8bc6-5224-434a-980c-540d0eaa7376-kube-api-access-mb5fp" (OuterVolumeSpecName: "kube-api-access-mb5fp") pod "9f7b8bc6-5224-434a-980c-540d0eaa7376" (UID: "9f7b8bc6-5224-434a-980c-540d0eaa7376"). InnerVolumeSpecName "kube-api-access-mb5fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.951222 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb5fp\" (UniqueName: \"kubernetes.io/projected/9f7b8bc6-5224-434a-980c-540d0eaa7376-kube-api-access-mb5fp\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:47 crc kubenswrapper[4787]: I0219 20:47:47.980847 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f7b8bc6-5224-434a-980c-540d0eaa7376" (UID: "9f7b8bc6-5224-434a-980c-540d0eaa7376"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.053906 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7b8bc6-5224-434a-980c-540d0eaa7376-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.169988 4787 generic.go:334] "Generic (PLEG): container finished" podID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerID="a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00" exitCode=0 Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.170035 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lv8n" event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerDied","Data":"a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00"} Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.170065 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lv8n" event={"ID":"9f7b8bc6-5224-434a-980c-540d0eaa7376","Type":"ContainerDied","Data":"7abc4dcfd00cbeadbc7edca30dd0b5cb4b1034c1ece192e9b3c1ac1212cac499"} Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.170082 4787 scope.go:117] "RemoveContainer" containerID="a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.170250 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lv8n" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.198463 4787 scope.go:117] "RemoveContainer" containerID="576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.211459 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lv8n"] Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.222208 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lv8n"] Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.227340 4787 scope.go:117] "RemoveContainer" containerID="a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.281105 4787 scope.go:117] "RemoveContainer" containerID="a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00" Feb 19 20:47:48 crc kubenswrapper[4787]: E0219 20:47:48.281731 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00\": container with ID starting with a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00 not found: ID does not exist" containerID="a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.281772 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00"} err="failed to get container status \"a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00\": rpc error: code = NotFound desc = could not find container \"a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00\": container with ID starting with a2be99d35dab31c169f67515628178fee14b44909c8bf60082bd6f6e6a1f0b00 not found: ID does 
not exist" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.281797 4787 scope.go:117] "RemoveContainer" containerID="576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa" Feb 19 20:47:48 crc kubenswrapper[4787]: E0219 20:47:48.282905 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa\": container with ID starting with 576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa not found: ID does not exist" containerID="576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.282950 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa"} err="failed to get container status \"576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa\": rpc error: code = NotFound desc = could not find container \"576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa\": container with ID starting with 576c741a7f46ec6f734b7dce95e260119d9c0aeae52c72371bafce68cf2b34aa not found: ID does not exist" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.282981 4787 scope.go:117] "RemoveContainer" containerID="a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf" Feb 19 20:47:48 crc kubenswrapper[4787]: E0219 20:47:48.283233 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf\": container with ID starting with a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf not found: ID does not exist" containerID="a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.283264 4787 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf"} err="failed to get container status \"a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf\": rpc error: code = NotFound desc = could not find container \"a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf\": container with ID starting with a5e92712e36c83f704312ce6189ff5b4685c45175280c40775acc08809a78ecf not found: ID does not exist" Feb 19 20:47:48 crc kubenswrapper[4787]: I0219 20:47:48.906468 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" path="/var/lib/kubelet/pods/9f7b8bc6-5224-434a-980c-540d0eaa7376/volumes" Feb 19 20:49:39 crc kubenswrapper[4787]: I0219 20:49:39.263066 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:49:39 crc kubenswrapper[4787]: I0219 20:49:39.264017 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:50:09 crc kubenswrapper[4787]: I0219 20:50:09.263839 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:50:09 crc kubenswrapper[4787]: I0219 20:50:09.264433 4787 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:50:39 crc kubenswrapper[4787]: I0219 20:50:39.263540 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:50:39 crc kubenswrapper[4787]: I0219 20:50:39.264669 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:50:39 crc kubenswrapper[4787]: I0219 20:50:39.264844 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:50:39 crc kubenswrapper[4787]: I0219 20:50:39.266178 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55355501af7cce90cd3e9d997bd7966a0e7d3f6b53e7334e1a4f61c4379fa62d"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:50:39 crc kubenswrapper[4787]: I0219 20:50:39.266306 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" 
containerID="cri-o://55355501af7cce90cd3e9d997bd7966a0e7d3f6b53e7334e1a4f61c4379fa62d" gracePeriod=600 Feb 19 20:50:40 crc kubenswrapper[4787]: I0219 20:50:40.230703 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="55355501af7cce90cd3e9d997bd7966a0e7d3f6b53e7334e1a4f61c4379fa62d" exitCode=0 Feb 19 20:50:40 crc kubenswrapper[4787]: I0219 20:50:40.230782 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"55355501af7cce90cd3e9d997bd7966a0e7d3f6b53e7334e1a4f61c4379fa62d"} Feb 19 20:50:40 crc kubenswrapper[4787]: I0219 20:50:40.231290 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364"} Feb 19 20:50:40 crc kubenswrapper[4787]: I0219 20:50:40.232323 4787 scope.go:117] "RemoveContainer" containerID="07b50d6fee24baf81f59cb23bff9e182f1cf6167cd1a3c064f96c73e2669a1f9" Feb 19 20:50:52 crc kubenswrapper[4787]: I0219 20:50:52.371202 4787 generic.go:334] "Generic (PLEG): container finished" podID="3cd05e88-76fc-4a10-bc71-426177032c9f" containerID="d3c26c2fa6f8727c26bf0d43059c68c7113b29d82048b6f6ad2828bbf7aece9e" exitCode=1 Feb 19 20:50:52 crc kubenswrapper[4787]: I0219 20:50:52.371296 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3cd05e88-76fc-4a10-bc71-426177032c9f","Type":"ContainerDied","Data":"d3c26c2fa6f8727c26bf0d43059c68c7113b29d82048b6f6ad2828bbf7aece9e"} Feb 19 20:50:53 crc kubenswrapper[4787]: I0219 20:50:53.839251 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.018592 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-config-data\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.018730 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-temporary\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.018832 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ca-certs\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.018900 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config-secret\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.018943 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8kgj\" (UniqueName: \"kubernetes.io/projected/3cd05e88-76fc-4a10-bc71-426177032c9f-kube-api-access-h8kgj\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.019081 4787 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.019128 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ssh-key\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.019184 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-workdir\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.019379 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"3cd05e88-76fc-4a10-bc71-426177032c9f\" (UID: \"3cd05e88-76fc-4a10-bc71-426177032c9f\") " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.019859 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.019972 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-config-data" (OuterVolumeSpecName: "config-data") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.020656 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.020685 4787 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.025188 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.026451 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.028301 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd05e88-76fc-4a10-bc71-426177032c9f-kube-api-access-h8kgj" (OuterVolumeSpecName: "kube-api-access-h8kgj") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "kube-api-access-h8kgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.054890 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.059737 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.062812 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.084628 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3cd05e88-76fc-4a10-bc71-426177032c9f" (UID: "3cd05e88-76fc-4a10-bc71-426177032c9f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123402 4787 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123430 4787 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123441 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123454 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8kgj\" (UniqueName: \"kubernetes.io/projected/3cd05e88-76fc-4a10-bc71-426177032c9f-kube-api-access-h8kgj\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123468 4787 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cd05e88-76fc-4a10-bc71-426177032c9f-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123478 4787 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/3cd05e88-76fc-4a10-bc71-426177032c9f-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.123491 4787 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3cd05e88-76fc-4a10-bc71-426177032c9f-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.173374 4787 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.225355 4787 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.397424 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3cd05e88-76fc-4a10-bc71-426177032c9f","Type":"ContainerDied","Data":"0a39cb75a1a02358210fecfe801a1581acfe973902bda63852baf376d39bddff"} Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.397468 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a39cb75a1a02358210fecfe801a1581acfe973902bda63852baf376d39bddff" Feb 19 20:50:54 crc kubenswrapper[4787]: I0219 20:50:54.397471 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.103649 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.106327 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.106510 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.106690 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.106884 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.107033 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.107164 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.107313 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.107527 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.107563 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.107577 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.107634 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.107648 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="extract-utilities" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.107675 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.107691 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.107726 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.108409 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.108446 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.108460 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.108512 4787 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.108526 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="extract-content" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.108566 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.108581 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.108637 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.108653 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: E0219 20:51:06.108689 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd05e88-76fc-4a10-bc71-426177032c9f" containerName="tempest-tests-tempest-tests-runner" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.110314 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd05e88-76fc-4a10-bc71-426177032c9f" containerName="tempest-tests-tempest-tests-runner" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.110974 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="c559fb3f-1a69-4c68-b415-ba88d0dfc4a2" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.114813 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd6c78b-285c-430e-89fb-ea84faf4f2d4" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.115019 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9f7b8bc6-5224-434a-980c-540d0eaa7376" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.115210 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="402fe83a-6ac0-49f0-8002-16b473ccab19" containerName="registry-server" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.115399 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd05e88-76fc-4a10-bc71-426177032c9f" containerName="tempest-tests-tempest-tests-runner" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.116663 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.116802 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.125814 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4xfxp" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.208903 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.209123 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zfx\" (UniqueName: \"kubernetes.io/projected/f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b-kube-api-access-64zfx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.311894 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.312035 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zfx\" (UniqueName: \"kubernetes.io/projected/f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b-kube-api-access-64zfx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.314483 4787 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.341463 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zfx\" (UniqueName: \"kubernetes.io/projected/f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b-kube-api-access-64zfx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.361558 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.444886 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 20:51:06 crc kubenswrapper[4787]: I0219 20:51:06.914701 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 20:51:07 crc kubenswrapper[4787]: I0219 20:51:07.574411 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b","Type":"ContainerStarted","Data":"87ba6a7d8c4835776188d82a86ee47880725ca34469f2d28ee8e1d69e70d1ee8"} Feb 19 20:51:08 crc kubenswrapper[4787]: I0219 20:51:08.611080 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b","Type":"ContainerStarted","Data":"6b5159f035ba0573c1fcf04d58535374784e74787e64244bf7c1130c9bdb070c"} Feb 19 20:51:08 crc kubenswrapper[4787]: I0219 20:51:08.647207 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.452094207 podStartE2EDuration="2.647158068s" podCreationTimestamp="2026-02-19 20:51:06 +0000 UTC" firstStartedPulling="2026-02-19 20:51:06.923117574 +0000 UTC m=+5534.713783516" lastFinishedPulling="2026-02-19 20:51:08.118181425 +0000 UTC m=+5535.908847377" observedRunningTime="2026-02-19 20:51:08.630051412 +0000 UTC m=+5536.420717354" watchObservedRunningTime="2026-02-19 20:51:08.647158068 +0000 UTC m=+5536.437824020" Feb 19 20:51:47 crc 
kubenswrapper[4787]: I0219 20:51:47.758878 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7mnxb/must-gather-2wq44"] Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.761405 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.763880 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7mnxb"/"default-dockercfg-xq8hb" Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.764021 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7mnxb"/"kube-root-ca.crt" Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.764174 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7mnxb"/"openshift-service-ca.crt" Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.782101 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7mnxb/must-gather-2wq44"] Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.910846 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9deab114-1731-496c-8d48-75e6c998fcfe-must-gather-output\") pod \"must-gather-2wq44\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:47 crc kubenswrapper[4787]: I0219 20:51:47.911039 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b5lb\" (UniqueName: \"kubernetes.io/projected/9deab114-1731-496c-8d48-75e6c998fcfe-kube-api-access-2b5lb\") pod \"must-gather-2wq44\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:48 crc kubenswrapper[4787]: I0219 20:51:48.013808 4787 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9deab114-1731-496c-8d48-75e6c998fcfe-must-gather-output\") pod \"must-gather-2wq44\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:48 crc kubenswrapper[4787]: I0219 20:51:48.013901 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b5lb\" (UniqueName: \"kubernetes.io/projected/9deab114-1731-496c-8d48-75e6c998fcfe-kube-api-access-2b5lb\") pod \"must-gather-2wq44\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:48 crc kubenswrapper[4787]: I0219 20:51:48.014334 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9deab114-1731-496c-8d48-75e6c998fcfe-must-gather-output\") pod \"must-gather-2wq44\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:48 crc kubenswrapper[4787]: I0219 20:51:48.338667 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b5lb\" (UniqueName: \"kubernetes.io/projected/9deab114-1731-496c-8d48-75e6c998fcfe-kube-api-access-2b5lb\") pod \"must-gather-2wq44\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:48 crc kubenswrapper[4787]: I0219 20:51:48.381942 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:51:48 crc kubenswrapper[4787]: I0219 20:51:48.874359 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7mnxb/must-gather-2wq44"] Feb 19 20:51:49 crc kubenswrapper[4787]: I0219 20:51:49.204800 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/must-gather-2wq44" event={"ID":"9deab114-1731-496c-8d48-75e6c998fcfe","Type":"ContainerStarted","Data":"6b8201f5d5a5b7a086515dbb268dbbbaf5e8312e1d224166a333e63563d07731"} Feb 19 20:51:57 crc kubenswrapper[4787]: I0219 20:51:57.294461 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/must-gather-2wq44" event={"ID":"9deab114-1731-496c-8d48-75e6c998fcfe","Type":"ContainerStarted","Data":"376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab"} Feb 19 20:51:57 crc kubenswrapper[4787]: I0219 20:51:57.295006 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/must-gather-2wq44" event={"ID":"9deab114-1731-496c-8d48-75e6c998fcfe","Type":"ContainerStarted","Data":"5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df"} Feb 19 20:51:57 crc kubenswrapper[4787]: I0219 20:51:57.313985 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7mnxb/must-gather-2wq44" podStartSLOduration=3.323488801 podStartE2EDuration="10.313968755s" podCreationTimestamp="2026-02-19 20:51:47 +0000 UTC" firstStartedPulling="2026-02-19 20:51:48.878152051 +0000 UTC m=+5576.668817993" lastFinishedPulling="2026-02-19 20:51:55.868632005 +0000 UTC m=+5583.659297947" observedRunningTime="2026-02-19 20:51:57.306511333 +0000 UTC m=+5585.097177275" watchObservedRunningTime="2026-02-19 20:51:57.313968755 +0000 UTC m=+5585.104634697" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.270632 4787 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-7mnxb/crc-debug-jldsh"] Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.273819 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.425358 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8872b5f8-f242-479c-bbaf-eba5b6bae045-host\") pod \"crc-debug-jldsh\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.425438 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9tb7\" (UniqueName: \"kubernetes.io/projected/8872b5f8-f242-479c-bbaf-eba5b6bae045-kube-api-access-n9tb7\") pod \"crc-debug-jldsh\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.527256 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8872b5f8-f242-479c-bbaf-eba5b6bae045-host\") pod \"crc-debug-jldsh\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.527723 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9tb7\" (UniqueName: \"kubernetes.io/projected/8872b5f8-f242-479c-bbaf-eba5b6bae045-kube-api-access-n9tb7\") pod \"crc-debug-jldsh\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.531892 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/8872b5f8-f242-479c-bbaf-eba5b6bae045-host\") pod \"crc-debug-jldsh\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.554466 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9tb7\" (UniqueName: \"kubernetes.io/projected/8872b5f8-f242-479c-bbaf-eba5b6bae045-kube-api-access-n9tb7\") pod \"crc-debug-jldsh\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.594499 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:52:02 crc kubenswrapper[4787]: I0219 20:52:02.645577 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:52:03 crc kubenswrapper[4787]: I0219 20:52:03.365591 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" event={"ID":"8872b5f8-f242-479c-bbaf-eba5b6bae045","Type":"ContainerStarted","Data":"69d62b53f448caa4ea90a58b126b8327bda35ef6cd9b003b0c04fcad48b547c6"} Feb 19 20:52:15 crc kubenswrapper[4787]: I0219 20:52:15.505022 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" event={"ID":"8872b5f8-f242-479c-bbaf-eba5b6bae045","Type":"ContainerStarted","Data":"924884b1f11b7b06df78b51158cedc1bb33b801c9bdeaa20bf71be93b6159d98"} Feb 19 20:52:15 crc kubenswrapper[4787]: I0219 20:52:15.528524 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" podStartSLOduration=1.046853988 podStartE2EDuration="13.528495846s" podCreationTimestamp="2026-02-19 20:52:02 +0000 UTC" firstStartedPulling="2026-02-19 20:52:02.645233191 +0000 UTC m=+5590.435899133" lastFinishedPulling="2026-02-19 
20:52:15.126875049 +0000 UTC m=+5602.917540991" observedRunningTime="2026-02-19 20:52:15.5247752 +0000 UTC m=+5603.315441152" watchObservedRunningTime="2026-02-19 20:52:15.528495846 +0000 UTC m=+5603.319161798" Feb 19 20:52:39 crc kubenswrapper[4787]: I0219 20:52:39.263818 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:52:39 crc kubenswrapper[4787]: I0219 20:52:39.264365 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:53:09 crc kubenswrapper[4787]: I0219 20:53:09.099810 4787 generic.go:334] "Generic (PLEG): container finished" podID="8872b5f8-f242-479c-bbaf-eba5b6bae045" containerID="924884b1f11b7b06df78b51158cedc1bb33b801c9bdeaa20bf71be93b6159d98" exitCode=0 Feb 19 20:53:09 crc kubenswrapper[4787]: I0219 20:53:09.099901 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" event={"ID":"8872b5f8-f242-479c-bbaf-eba5b6bae045","Type":"ContainerDied","Data":"924884b1f11b7b06df78b51158cedc1bb33b801c9bdeaa20bf71be93b6159d98"} Feb 19 20:53:09 crc kubenswrapper[4787]: I0219 20:53:09.263030 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:53:09 crc kubenswrapper[4787]: I0219 20:53:09.263492 4787 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.259310 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.314726 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-jldsh"] Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.332506 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-jldsh"] Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.370852 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8872b5f8-f242-479c-bbaf-eba5b6bae045-host\") pod \"8872b5f8-f242-479c-bbaf-eba5b6bae045\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.371154 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9tb7\" (UniqueName: \"kubernetes.io/projected/8872b5f8-f242-479c-bbaf-eba5b6bae045-kube-api-access-n9tb7\") pod \"8872b5f8-f242-479c-bbaf-eba5b6bae045\" (UID: \"8872b5f8-f242-479c-bbaf-eba5b6bae045\") " Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.372292 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8872b5f8-f242-479c-bbaf-eba5b6bae045-host" (OuterVolumeSpecName: "host") pod "8872b5f8-f242-479c-bbaf-eba5b6bae045" (UID: "8872b5f8-f242-479c-bbaf-eba5b6bae045"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.392143 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8872b5f8-f242-479c-bbaf-eba5b6bae045-kube-api-access-n9tb7" (OuterVolumeSpecName: "kube-api-access-n9tb7") pod "8872b5f8-f242-479c-bbaf-eba5b6bae045" (UID: "8872b5f8-f242-479c-bbaf-eba5b6bae045"). InnerVolumeSpecName "kube-api-access-n9tb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.476453 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9tb7\" (UniqueName: \"kubernetes.io/projected/8872b5f8-f242-479c-bbaf-eba5b6bae045-kube-api-access-n9tb7\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.476489 4787 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8872b5f8-f242-479c-bbaf-eba5b6bae045-host\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:10 crc kubenswrapper[4787]: I0219 20:53:10.905043 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8872b5f8-f242-479c-bbaf-eba5b6bae045" path="/var/lib/kubelet/pods/8872b5f8-f242-479c-bbaf-eba5b6bae045/volumes" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.132089 4787 scope.go:117] "RemoveContainer" containerID="924884b1f11b7b06df78b51158cedc1bb33b801c9bdeaa20bf71be93b6159d98" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.132191 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-jldsh" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.516860 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-vp77n"] Feb 19 20:53:11 crc kubenswrapper[4787]: E0219 20:53:11.518265 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8872b5f8-f242-479c-bbaf-eba5b6bae045" containerName="container-00" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.518356 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8872b5f8-f242-479c-bbaf-eba5b6bae045" containerName="container-00" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.518709 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8872b5f8-f242-479c-bbaf-eba5b6bae045" containerName="container-00" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.519628 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.702107 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cda480e-0b91-4989-94fd-d8e987e1a573-host\") pod \"crc-debug-vp77n\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.702221 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dpn\" (UniqueName: \"kubernetes.io/projected/1cda480e-0b91-4989-94fd-d8e987e1a573-kube-api-access-g9dpn\") pod \"crc-debug-vp77n\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.804154 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dpn\" (UniqueName: 
\"kubernetes.io/projected/1cda480e-0b91-4989-94fd-d8e987e1a573-kube-api-access-g9dpn\") pod \"crc-debug-vp77n\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.804382 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cda480e-0b91-4989-94fd-d8e987e1a573-host\") pod \"crc-debug-vp77n\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.804539 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cda480e-0b91-4989-94fd-d8e987e1a573-host\") pod \"crc-debug-vp77n\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:11 crc kubenswrapper[4787]: I0219 20:53:11.942382 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dpn\" (UniqueName: \"kubernetes.io/projected/1cda480e-0b91-4989-94fd-d8e987e1a573-kube-api-access-g9dpn\") pod \"crc-debug-vp77n\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:12 crc kubenswrapper[4787]: I0219 20:53:12.140975 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:13 crc kubenswrapper[4787]: I0219 20:53:13.162452 4787 generic.go:334] "Generic (PLEG): container finished" podID="1cda480e-0b91-4989-94fd-d8e987e1a573" containerID="5ce3d3fc2422e19f6ea0d8949a9aa613f3f54c6a2e06ff3cf84e08e3adba2dde" exitCode=0 Feb 19 20:53:13 crc kubenswrapper[4787]: I0219 20:53:13.162527 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" event={"ID":"1cda480e-0b91-4989-94fd-d8e987e1a573","Type":"ContainerDied","Data":"5ce3d3fc2422e19f6ea0d8949a9aa613f3f54c6a2e06ff3cf84e08e3adba2dde"} Feb 19 20:53:13 crc kubenswrapper[4787]: I0219 20:53:13.162893 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" event={"ID":"1cda480e-0b91-4989-94fd-d8e987e1a573","Type":"ContainerStarted","Data":"26ac0b3fe93e2aa11ee9a11b88e1e1609ff92fd00d259b93418aea080141029d"} Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.312370 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.471242 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cda480e-0b91-4989-94fd-d8e987e1a573-host\") pod \"1cda480e-0b91-4989-94fd-d8e987e1a573\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.471392 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cda480e-0b91-4989-94fd-d8e987e1a573-host" (OuterVolumeSpecName: "host") pod "1cda480e-0b91-4989-94fd-d8e987e1a573" (UID: "1cda480e-0b91-4989-94fd-d8e987e1a573"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.471567 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9dpn\" (UniqueName: \"kubernetes.io/projected/1cda480e-0b91-4989-94fd-d8e987e1a573-kube-api-access-g9dpn\") pod \"1cda480e-0b91-4989-94fd-d8e987e1a573\" (UID: \"1cda480e-0b91-4989-94fd-d8e987e1a573\") " Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.472183 4787 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1cda480e-0b91-4989-94fd-d8e987e1a573-host\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.495364 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cda480e-0b91-4989-94fd-d8e987e1a573-kube-api-access-g9dpn" (OuterVolumeSpecName: "kube-api-access-g9dpn") pod "1cda480e-0b91-4989-94fd-d8e987e1a573" (UID: "1cda480e-0b91-4989-94fd-d8e987e1a573"). InnerVolumeSpecName "kube-api-access-g9dpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:53:14 crc kubenswrapper[4787]: I0219 20:53:14.574971 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9dpn\" (UniqueName: \"kubernetes.io/projected/1cda480e-0b91-4989-94fd-d8e987e1a573-kube-api-access-g9dpn\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:15 crc kubenswrapper[4787]: I0219 20:53:15.202343 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" event={"ID":"1cda480e-0b91-4989-94fd-d8e987e1a573","Type":"ContainerDied","Data":"26ac0b3fe93e2aa11ee9a11b88e1e1609ff92fd00d259b93418aea080141029d"} Feb 19 20:53:15 crc kubenswrapper[4787]: I0219 20:53:15.202386 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26ac0b3fe93e2aa11ee9a11b88e1e1609ff92fd00d259b93418aea080141029d" Feb 19 20:53:15 crc kubenswrapper[4787]: I0219 20:53:15.202441 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-vp77n" Feb 19 20:53:15 crc kubenswrapper[4787]: I0219 20:53:15.349679 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-vp77n"] Feb 19 20:53:15 crc kubenswrapper[4787]: I0219 20:53:15.358646 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-vp77n"] Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.681584 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-xwg89"] Feb 19 20:53:16 crc kubenswrapper[4787]: E0219 20:53:16.683429 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cda480e-0b91-4989-94fd-d8e987e1a573" containerName="container-00" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.683515 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cda480e-0b91-4989-94fd-d8e987e1a573" containerName="container-00" Feb 19 20:53:16 crc 
kubenswrapper[4787]: I0219 20:53:16.683832 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cda480e-0b91-4989-94fd-d8e987e1a573" containerName="container-00" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.684733 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.832124 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9mcb\" (UniqueName: \"kubernetes.io/projected/76c1776e-4925-4f16-97a7-c175efe21dd5-kube-api-access-h9mcb\") pod \"crc-debug-xwg89\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.832274 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76c1776e-4925-4f16-97a7-c175efe21dd5-host\") pod \"crc-debug-xwg89\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.911254 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cda480e-0b91-4989-94fd-d8e987e1a573" path="/var/lib/kubelet/pods/1cda480e-0b91-4989-94fd-d8e987e1a573/volumes" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.934641 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76c1776e-4925-4f16-97a7-c175efe21dd5-host\") pod \"crc-debug-xwg89\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.934814 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/76c1776e-4925-4f16-97a7-c175efe21dd5-host\") pod \"crc-debug-xwg89\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.934920 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mcb\" (UniqueName: \"kubernetes.io/projected/76c1776e-4925-4f16-97a7-c175efe21dd5-kube-api-access-h9mcb\") pod \"crc-debug-xwg89\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:16 crc kubenswrapper[4787]: I0219 20:53:16.968694 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9mcb\" (UniqueName: \"kubernetes.io/projected/76c1776e-4925-4f16-97a7-c175efe21dd5-kube-api-access-h9mcb\") pod \"crc-debug-xwg89\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:17 crc kubenswrapper[4787]: I0219 20:53:17.009225 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:17 crc kubenswrapper[4787]: W0219 20:53:17.039826 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c1776e_4925_4f16_97a7_c175efe21dd5.slice/crio-b42f1886a4e11f4a7f733791a8bb9e70cb500a0946506955ccf779183b8689bc WatchSource:0}: Error finding container b42f1886a4e11f4a7f733791a8bb9e70cb500a0946506955ccf779183b8689bc: Status 404 returned error can't find the container with id b42f1886a4e11f4a7f733791a8bb9e70cb500a0946506955ccf779183b8689bc Feb 19 20:53:17 crc kubenswrapper[4787]: I0219 20:53:17.224802 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-xwg89" event={"ID":"76c1776e-4925-4f16-97a7-c175efe21dd5","Type":"ContainerStarted","Data":"b42f1886a4e11f4a7f733791a8bb9e70cb500a0946506955ccf779183b8689bc"} Feb 19 20:53:18 crc kubenswrapper[4787]: I0219 20:53:18.235743 4787 generic.go:334] "Generic (PLEG): container finished" podID="76c1776e-4925-4f16-97a7-c175efe21dd5" containerID="1cf23b8babdecb8b3f6612b175273590bb646da2fa74f1900d88c2e84c115bff" exitCode=0 Feb 19 20:53:18 crc kubenswrapper[4787]: I0219 20:53:18.235847 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/crc-debug-xwg89" event={"ID":"76c1776e-4925-4f16-97a7-c175efe21dd5","Type":"ContainerDied","Data":"1cf23b8babdecb8b3f6612b175273590bb646da2fa74f1900d88c2e84c115bff"} Feb 19 20:53:18 crc kubenswrapper[4787]: I0219 20:53:18.284211 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-xwg89"] Feb 19 20:53:18 crc kubenswrapper[4787]: I0219 20:53:18.300373 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7mnxb/crc-debug-xwg89"] Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.368837 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.491269 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9mcb\" (UniqueName: \"kubernetes.io/projected/76c1776e-4925-4f16-97a7-c175efe21dd5-kube-api-access-h9mcb\") pod \"76c1776e-4925-4f16-97a7-c175efe21dd5\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.491336 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76c1776e-4925-4f16-97a7-c175efe21dd5-host\") pod \"76c1776e-4925-4f16-97a7-c175efe21dd5\" (UID: \"76c1776e-4925-4f16-97a7-c175efe21dd5\") " Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.491487 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76c1776e-4925-4f16-97a7-c175efe21dd5-host" (OuterVolumeSpecName: "host") pod "76c1776e-4925-4f16-97a7-c175efe21dd5" (UID: "76c1776e-4925-4f16-97a7-c175efe21dd5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.491976 4787 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76c1776e-4925-4f16-97a7-c175efe21dd5-host\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.499813 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c1776e-4925-4f16-97a7-c175efe21dd5-kube-api-access-h9mcb" (OuterVolumeSpecName: "kube-api-access-h9mcb") pod "76c1776e-4925-4f16-97a7-c175efe21dd5" (UID: "76c1776e-4925-4f16-97a7-c175efe21dd5"). InnerVolumeSpecName "kube-api-access-h9mcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:53:19 crc kubenswrapper[4787]: I0219 20:53:19.594653 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9mcb\" (UniqueName: \"kubernetes.io/projected/76c1776e-4925-4f16-97a7-c175efe21dd5-kube-api-access-h9mcb\") on node \"crc\" DevicePath \"\"" Feb 19 20:53:20 crc kubenswrapper[4787]: I0219 20:53:20.258640 4787 scope.go:117] "RemoveContainer" containerID="1cf23b8babdecb8b3f6612b175273590bb646da2fa74f1900d88c2e84c115bff" Feb 19 20:53:20 crc kubenswrapper[4787]: I0219 20:53:20.258894 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/crc-debug-xwg89" Feb 19 20:53:20 crc kubenswrapper[4787]: I0219 20:53:20.908149 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c1776e-4925-4f16-97a7-c175efe21dd5" path="/var/lib/kubelet/pods/76c1776e-4925-4f16-97a7-c175efe21dd5/volumes" Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.263638 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.264139 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.264186 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.264852 4787 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.264906 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" gracePeriod=600 Feb 19 20:53:39 crc kubenswrapper[4787]: E0219 20:53:39.410512 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.465411 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" exitCode=0 Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.465454 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364"} Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.465495 4787 scope.go:117] "RemoveContainer" 
containerID="55355501af7cce90cd3e9d997bd7966a0e7d3f6b53e7334e1a4f61c4379fa62d" Feb 19 20:53:39 crc kubenswrapper[4787]: I0219 20:53:39.467094 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:53:39 crc kubenswrapper[4787]: E0219 20:53:39.467715 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:53:51 crc kubenswrapper[4787]: I0219 20:53:51.892596 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:53:51 crc kubenswrapper[4787]: E0219 20:53:51.893519 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:53:52 crc kubenswrapper[4787]: I0219 20:53:52.519856 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d546d784-5037-48eb-96dd-e48dfe765bd4/aodh-api/0.log" Feb 19 20:53:52 crc kubenswrapper[4787]: I0219 20:53:52.653424 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d546d784-5037-48eb-96dd-e48dfe765bd4/aodh-evaluator/0.log" Feb 19 20:53:52 crc kubenswrapper[4787]: I0219 20:53:52.701876 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_d546d784-5037-48eb-96dd-e48dfe765bd4/aodh-listener/0.log" Feb 19 20:53:52 crc kubenswrapper[4787]: I0219 20:53:52.721678 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_d546d784-5037-48eb-96dd-e48dfe765bd4/aodh-notifier/0.log" Feb 19 20:53:52 crc kubenswrapper[4787]: I0219 20:53:52.836635 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5d48c87bfd-7pzgt_f0c827ac-57d9-48c2-adc1-4358fd81e5b1/barbican-api/0.log" Feb 19 20:53:52 crc kubenswrapper[4787]: I0219 20:53:52.955790 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5d48c87bfd-7pzgt_f0c827ac-57d9-48c2-adc1-4358fd81e5b1/barbican-api-log/0.log" Feb 19 20:53:53 crc kubenswrapper[4787]: I0219 20:53:53.098956 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67b499844-xlhd9_2d708d6b-a0b3-4add-8c98-0a33b73965ed/barbican-keystone-listener/0.log" Feb 19 20:53:53 crc kubenswrapper[4787]: I0219 20:53:53.184044 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67b499844-xlhd9_2d708d6b-a0b3-4add-8c98-0a33b73965ed/barbican-keystone-listener-log/0.log" Feb 19 20:53:53 crc kubenswrapper[4787]: I0219 20:53:53.204907 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-755bd8f67-9b2kl_31450605-eeb8-465d-924d-509a32d908ea/barbican-worker/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.034401 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-755bd8f67-9b2kl_31450605-eeb8-465d-924d-509a32d908ea/barbican-worker-log/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.061516 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-4zll6_b03f276c-b1cd-46aa-ac07-69221b9d6684/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 
19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.292397 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_182e6de6-87eb-490b-a614-82c6063752f9/ceilometer-notification-agent/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.346271 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_182e6de6-87eb-490b-a614-82c6063752f9/ceilometer-central-agent/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.358427 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_182e6de6-87eb-490b-a614-82c6063752f9/proxy-httpd/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.393778 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_182e6de6-87eb-490b-a614-82c6063752f9/sg-core/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.554350 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4d841c01-9ebe-4b54-b1c4-a8636ba01db1/cinder-api-log/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.640400 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4d841c01-9ebe-4b54-b1c4-a8636ba01db1/cinder-api/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.777929 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3f4e423c-1e8b-47e3-af08-1190ee8942aa/cinder-scheduler/0.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.872878 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3f4e423c-1e8b-47e3-af08-1190ee8942aa/cinder-scheduler/1.log" Feb 19 20:53:54 crc kubenswrapper[4787]: I0219 20:53:54.978342 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3f4e423c-1e8b-47e3-af08-1190ee8942aa/probe/0.log" Feb 19 20:53:55 crc kubenswrapper[4787]: I0219 20:53:55.027411 4787 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-sv5sw_718c9653-d673-4f8b-bf7f-43d983bd9854/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:55 crc kubenswrapper[4787]: I0219 20:53:55.188449 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-74klb_c51c1e25-0f98-4bc4-ad23-c5123e535d97/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:55 crc kubenswrapper[4787]: I0219 20:53:55.999773 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f88wr_e7617c67-0e97-4496-abe7-8b5ab1db282d/init/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.183627 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f88wr_e7617c67-0e97-4496-abe7-8b5ab1db282d/init/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.253031 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-f88wr_e7617c67-0e97-4496-abe7-8b5ab1db282d/dnsmasq-dns/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.306590 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xqjw2_7cd53a77-cdee-4ab7-b6ae-0a570d7cd9ca/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.500411 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e1a3964b-540d-4b05-a2ad-b39a87d44a3a/glance-httpd/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.537697 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e1a3964b-540d-4b05-a2ad-b39a87d44a3a/glance-log/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.692784 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_6a841286-0da4-4bd8-96d3-d7cb751bbafb/glance-httpd/0.log" Feb 19 20:53:56 crc kubenswrapper[4787]: I0219 20:53:56.724247 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6a841286-0da4-4bd8-96d3-d7cb751bbafb/glance-log/0.log" Feb 19 20:53:57 crc kubenswrapper[4787]: I0219 20:53:57.513798 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5b6df797bd-hbhzc_70991400-5d96-44fe-934f-c866defe8adb/heat-api/0.log" Feb 19 20:53:57 crc kubenswrapper[4787]: I0219 20:53:57.514369 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6bd6f6c8df-52ltw_ee9c5ff1-8d4f-4bd2-b00c-4679c695e557/heat-engine/0.log" Feb 19 20:53:57 crc kubenswrapper[4787]: I0219 20:53:57.529865 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-b6lp2_7dbede29-862e-457a-a641-689836eab084/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:57 crc kubenswrapper[4787]: I0219 20:53:57.582841 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-68df46bdff-kbz99_8467aa04-3865-45da-8f9f-98011798d7d6/heat-cfnapi/0.log" Feb 19 20:53:57 crc kubenswrapper[4787]: I0219 20:53:57.775600 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-r2hlh_5afed044-bccc-4d1f-9b24-ffbc4ebecb65/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:58 crc kubenswrapper[4787]: I0219 20:53:58.078316 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e7a69404-5a15-40e5-bd22-faa4493739fa/kube-state-metrics/0.log" Feb 19 20:53:58 crc kubenswrapper[4787]: I0219 20:53:58.150549 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29525521-pkssb_468d7d08-0fac-4e00-a1c0-c244a2b39aee/keystone-cron/0.log" Feb 19 20:53:58 crc kubenswrapper[4787]: I0219 20:53:58.389161 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-l8q26_d51bd7f7-9324-441b-b8f4-1ebde86c404f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:58 crc kubenswrapper[4787]: I0219 20:53:58.453419 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-99zzw_0e791926-1bff-4cce-9d66-994a91623a18/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:53:58 crc kubenswrapper[4787]: I0219 20:53:58.872252 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_be25b512-f3a6-4bdc-81a1-a4bcc1ef237e/mysqld-exporter/0.log" Feb 19 20:53:59 crc kubenswrapper[4787]: I0219 20:53:59.268005 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c797bbccc-wln47_9239fd31-7ea3-445a-bee3-b3c1a45f58cf/neutron-httpd/0.log" Feb 19 20:53:59 crc kubenswrapper[4787]: I0219 20:53:59.346124 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c797bbccc-wln47_9239fd31-7ea3-445a-bee3-b3c1a45f58cf/neutron-api/0.log" Feb 19 20:53:59 crc kubenswrapper[4787]: I0219 20:53:59.477021 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-s8x4b_dd1a533e-0d5d-4b5d-816d-81fb1bf769be/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:00 crc kubenswrapper[4787]: I0219 20:54:00.143391 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d3590400-4951-4e45-b479-bc2d31b92a57/memcached/0.log" Feb 19 20:54:00 crc kubenswrapper[4787]: I0219 20:54:00.279195 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_64c53e31-72f6-4357-a3b0-dabedc9b834e/nova-cell0-conductor-conductor/0.log" Feb 19 20:54:00 crc kubenswrapper[4787]: I0219 20:54:00.509459 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a898d73d-04ec-4e21-bd5a-99e453f36d8e/nova-api-log/0.log" Feb 19 20:54:00 crc kubenswrapper[4787]: I0219 20:54:00.720427 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7a5f261f-6f8a-4da9-bf47-adcdcd3fbd6b/nova-cell1-conductor-conductor/0.log" Feb 19 20:54:00 crc kubenswrapper[4787]: I0219 20:54:00.872085 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ed3a53f5-c0d3-4b73-8af5-b28a227d3859/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 20:54:00 crc kubenswrapper[4787]: I0219 20:54:00.902852 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a898d73d-04ec-4e21-bd5a-99e453f36d8e/nova-api-api/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 20:54:01.048668 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-frmbq_044bb1cd-0401-4d14-9fbe-10160ee01243/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 20:54:01.200425 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7a497801-b864-4180-bcc6-d6b7f3c5b35e/nova-metadata-log/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 20:54:01.567495 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_32df70fc-d0d1-42ed-b37f-3ba71192a187/nova-scheduler-scheduler/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 20:54:01.587038 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fec6d8b2-4d43-4053-8028-747e6d28f7c4/mysql-bootstrap/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 
20:54:01.851215 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fec6d8b2-4d43-4053-8028-747e6d28f7c4/mysql-bootstrap/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 20:54:01.857790 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fec6d8b2-4d43-4053-8028-747e6d28f7c4/galera/0.log" Feb 19 20:54:01 crc kubenswrapper[4787]: I0219 20:54:01.886034 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fec6d8b2-4d43-4053-8028-747e6d28f7c4/galera/1.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.019023 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f8c6bbf7c-8r6fm_c95c4e6a-386d-49a9-a8ff-0ea1fdc47ecf/keystone-api/0.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.135228 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5848c368-e71c-439d-bfca-f241813f9136/mysql-bootstrap/0.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.387335 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5848c368-e71c-439d-bfca-f241813f9136/mysql-bootstrap/0.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.438734 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5848c368-e71c-439d-bfca-f241813f9136/galera/0.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.444043 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5848c368-e71c-439d-bfca-f241813f9136/galera/1.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.686845 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-j5ggf_f71fcf51-23db-4068-9690-1624d25948cb/openstack-network-exporter/0.log" Feb 19 20:54:02 crc kubenswrapper[4787]: I0219 20:54:02.721764 4787 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstackclient_9a61c722-e803-4b6c-9127-e4929553f802/openstackclient/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.073001 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7a497801-b864-4180-bcc6-d6b7f3c5b35e/nova-metadata-metadata/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.574419 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n4c8f_19e04b88-069d-4c44-9511-ed765c0424ae/ovn-controller/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.575448 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x42pw_f94e933a-d230-409b-99ff-f47cf13a9638/ovsdb-server-init/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.839499 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x42pw_f94e933a-d230-409b-99ff-f47cf13a9638/ovs-vswitchd/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.852566 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x42pw_f94e933a-d230-409b-99ff-f47cf13a9638/ovsdb-server/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.898475 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fz5rx_0a8fa017-7d8f-49d2-bd24-2a65b206e279/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:03 crc kubenswrapper[4787]: I0219 20:54:03.918935 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x42pw_f94e933a-d230-409b-99ff-f47cf13a9638/ovsdb-server-init/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.054413 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_765d10d6-7924-425b-b553-6e921dc89049/openstack-network-exporter/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.102367 
4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_765d10d6-7924-425b-b553-6e921dc89049/ovn-northd/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.179754 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2591ddd4-424a-4be8-ad04-62ad3e0a82a6/openstack-network-exporter/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.301669 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_20cd5d9e-251b-4f3f-9402-b19a7676c9a5/openstack-network-exporter/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.342381 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2591ddd4-424a-4be8-ad04-62ad3e0a82a6/ovsdbserver-nb/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.369836 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_20cd5d9e-251b-4f3f-9402-b19a7676c9a5/ovsdbserver-sb/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.592834 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f76547854-2wh5m_05b1456e-ecae-4668-a7c5-4aea5446af5f/placement-api/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.662716 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f76547854-2wh5m_05b1456e-ecae-4668-a7c5-4aea5446af5f/placement-log/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.667469 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_659dcc4f-0134-40f4-a6ee-150bb5dee79b/init-config-reloader/0.log" Feb 19 20:54:04 crc kubenswrapper[4787]: I0219 20:54:04.898824 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:54:04 crc kubenswrapper[4787]: E0219 20:54:04.899373 4787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.034432 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_659dcc4f-0134-40f4-a6ee-150bb5dee79b/init-config-reloader/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.056145 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_659dcc4f-0134-40f4-a6ee-150bb5dee79b/config-reloader/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.518321 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_659dcc4f-0134-40f4-a6ee-150bb5dee79b/thanos-sidecar/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.527515 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_659dcc4f-0134-40f4-a6ee-150bb5dee79b/prometheus/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.616031 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_eebe8011-08bc-437a-89d5-f7aecaedceb5/setup-container/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.780398 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_eebe8011-08bc-437a-89d5-f7aecaedceb5/setup-container/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.824433 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_eebe8011-08bc-437a-89d5-f7aecaedceb5/rabbitmq/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.850031 4787 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3/setup-container/0.log" Feb 19 20:54:05 crc kubenswrapper[4787]: I0219 20:54:05.985290 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3/setup-container/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.037500 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d2a49f4-a29e-46b0-86fe-6bcb8e1b5cf3/rabbitmq/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.044227 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_eb22ceb0-4965-4d12-9950-93feeb6876e9/setup-container/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.247402 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef/setup-container/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.296790 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_eb22ceb0-4965-4d12-9950-93feeb6876e9/setup-container/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.335942 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_eb22ceb0-4965-4d12-9950-93feeb6876e9/rabbitmq/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.520299 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-c8dq6_0ba5d253-5278-4b6c-a071-cec1f3824dd4/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.534397 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef/setup-container/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.579439 4787 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_8ec5dd07-5c2c-49c2-b2d2-24ea5adbdeef/rabbitmq/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.785068 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mw8x5_290f63d5-112d-49e1-ade3-47e3a699dee7/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.836944 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-srwrx_688fc5c8-6b45-40de-9e80-8b90ccc5ca16/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:06 crc kubenswrapper[4787]: I0219 20:54:06.838346 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-vzrk2_17844a3d-7feb-457b-8e01-f38398e34b63/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.016501 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-vhp6n_9b584219-d3d0-490f-bba5-dff958e63e6a/ssh-known-hosts-edpm-deployment/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.122479 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7dc655b5f9-gwjrd_d0374190-eb11-435c-af6f-abd31845a33e/proxy-server/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.221461 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7dc655b5f9-gwjrd_d0374190-eb11-435c-af6f-abd31845a33e/proxy-httpd/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.295081 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-bk7pj_f48cb9d5-9e69-4553-b61e-e0bde367ffc7/swift-ring-rebalance/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.338500 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/account-auditor/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.406168 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/account-reaper/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.493905 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/account-server/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.499907 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/account-replicator/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.546509 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/container-auditor/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.604282 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/container-replicator/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.649866 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/container-server/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.721939 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/object-expirer/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.730152 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/container-updater/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.750585 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/object-auditor/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.827133 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/object-replicator/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.878661 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/object-server/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.951187 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/object-updater/0.log" Feb 19 20:54:07 crc kubenswrapper[4787]: I0219 20:54:07.958473 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/rsync/0.log" Feb 19 20:54:08 crc kubenswrapper[4787]: I0219 20:54:08.007251 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3109e7cb-bd74-40d5-a2ab-deb7a9794d44/swift-recon-cron/0.log" Feb 19 20:54:08 crc kubenswrapper[4787]: I0219 20:54:08.134017 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-42v9n_493db6ac-9a60-4a9a-9cea-e99c4580569e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:08 crc kubenswrapper[4787]: I0219 20:54:08.227893 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-dvdfp_c8b17184-10d3-4bed-a849-e9b38351d827/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:08 crc kubenswrapper[4787]: I0219 20:54:08.521680 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f3ccf001-92e7-4c60-a1cb-1ce4fb64c99b/test-operator-logs-container/0.log" Feb 19 20:54:08 crc kubenswrapper[4787]: I0219 20:54:08.635716 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nvdkk_3b90f36e-f50d-4430-8596-b157390ed5c7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 20:54:09 crc kubenswrapper[4787]: I0219 20:54:09.021981 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3cd05e88-76fc-4a10-bc71-426177032c9f/tempest-tests-tempest-tests-runner/0.log" Feb 19 20:54:15 crc kubenswrapper[4787]: I0219 20:54:15.892538 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:54:15 crc kubenswrapper[4787]: E0219 20:54:15.894949 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:54:27 crc kubenswrapper[4787]: I0219 20:54:27.892144 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:54:27 crc kubenswrapper[4787]: E0219 20:54:27.893075 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" 
podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.427628 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/util/0.log" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.577876 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/util/0.log" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.583138 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/pull/0.log" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.651427 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/pull/0.log" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.784900 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/pull/0.log" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.807322 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/extract/0.log" Feb 19 20:54:35 crc kubenswrapper[4787]: I0219 20:54:35.817863 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7e3fc03930efb6efd6702b045a302a5be09281bd8dcbbea481d1da8f77c52kh_f65fb8fa-29f2-4ef8-9d84-221f8e7c354a/util/0.log" Feb 19 20:54:36 crc kubenswrapper[4787]: I0219 20:54:36.475354 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-hdncj_39e4daf9-e2ed-4325-9f5f-27b2b5662945/manager/0.log" Feb 19 20:54:37 crc kubenswrapper[4787]: I0219 20:54:37.188281 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-tlx7r_f58c3336-8153-4c54-95c6-2cf2f23cbe57/manager/0.log" Feb 19 20:54:37 crc kubenswrapper[4787]: I0219 20:54:37.610829 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-zvvkw_0fdbbc7b-81f4-401b-8df0-59417ab3ec18/manager/1.log" Feb 19 20:54:37 crc kubenswrapper[4787]: I0219 20:54:37.685402 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-w557k_9cdc475f-0036-4e63-8fd4-c1e44537668d/manager/1.log" Feb 19 20:54:37 crc kubenswrapper[4787]: I0219 20:54:37.981669 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-w557k_9cdc475f-0036-4e63-8fd4-c1e44537668d/manager/0.log" Feb 19 20:54:38 crc kubenswrapper[4787]: I0219 20:54:38.193533 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-5k7fl_68b08cc9-812d-4199-8654-9a5a3f2a855f/manager/0.log" Feb 19 20:54:38 crc kubenswrapper[4787]: I0219 20:54:38.706482 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5g7hg_6e92b566-c5a6-40e8-be75-5de416385888/manager/1.log" Feb 19 20:54:38 crc kubenswrapper[4787]: I0219 20:54:38.927497 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-5g7hg_6e92b566-c5a6-40e8-be75-5de416385888/manager/0.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.093097 4787 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-j59h4_06aa3b20-a2ee-4c2b-bda6-0e876910a26c/manager/0.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.172362 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-zvvkw_0fdbbc7b-81f4-401b-8df0-59417ab3ec18/manager/0.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.331094 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-66hj2_4a47b4c3-d7f4-4194-bd9c-fdef06d3450d/manager/1.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.554267 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-66hj2_4a47b4c3-d7f4-4194-bd9c-fdef06d3450d/manager/0.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.653700 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-77xc6_46df12dd-6fd4-4508-8141-ef1cc6551d79/manager/1.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.674180 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-77xc6_46df12dd-6fd4-4508-8141-ef1cc6551d79/manager/0.log" Feb 19 20:54:39 crc kubenswrapper[4787]: I0219 20:54:39.885161 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-bbnhv_a752c75e-1e1e-4d78-b82a-95f8df84523f/manager/1.log" Feb 19 20:54:40 crc kubenswrapper[4787]: I0219 20:54:40.008361 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-bbnhv_a752c75e-1e1e-4d78-b82a-95f8df84523f/manager/0.log" Feb 19 20:54:40 crc kubenswrapper[4787]: I0219 20:54:40.060871 4787 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-9p6x4_00ef1a7b-bf28-4126-b60f-c79af3fde4da/manager/1.log" Feb 19 20:54:40 crc kubenswrapper[4787]: I0219 20:54:40.148367 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-9p6x4_00ef1a7b-bf28-4126-b60f-c79af3fde4da/manager/0.log" Feb 19 20:54:41 crc kubenswrapper[4787]: I0219 20:54:41.145922 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-whv4k_285a6f28-aeac-4b0d-816a-2eb05abe7ef3/manager/0.log" Feb 19 20:54:41 crc kubenswrapper[4787]: I0219 20:54:41.335113 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc_78a45da3-619d-4cc4-a819-6dad66a61737/manager/1.log" Feb 19 20:54:41 crc kubenswrapper[4787]: I0219 20:54:41.422511 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cx6gqc_78a45da3-619d-4cc4-a819-6dad66a61737/manager/0.log" Feb 19 20:54:41 crc kubenswrapper[4787]: I0219 20:54:41.858359 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b5dd774c6-bjggj_5273ac77-af0e-4a20-aa52-708ac057cfdc/operator/0.log" Feb 19 20:54:41 crc kubenswrapper[4787]: I0219 20:54:41.891541 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:54:41 crc kubenswrapper[4787]: E0219 20:54:41.891833 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:54:42 crc kubenswrapper[4787]: I0219 20:54:42.740753 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-88xzz_26a6b075-ab07-4508-86f7-2af4934e078a/manager/1.log" Feb 19 20:54:42 crc kubenswrapper[4787]: I0219 20:54:42.991946 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zrns9_de424d05-0977-4dac-8bd9-01c37cf49d4e/registry-server/0.log" Feb 19 20:54:43 crc kubenswrapper[4787]: I0219 20:54:43.220592 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-j2ktf_880dd943-ce91-4373-ab8a-fd5df0a44e2a/manager/1.log" Feb 19 20:54:43 crc kubenswrapper[4787]: I0219 20:54:43.275071 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-j2ktf_880dd943-ce91-4373-ab8a-fd5df0a44e2a/manager/0.log" Feb 19 20:54:43 crc kubenswrapper[4787]: I0219 20:54:43.586103 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-rv4sk_c0ee76ae-6d9e-4470-8f77-27d7d231bb7d/manager/0.log" Feb 19 20:54:44 crc kubenswrapper[4787]: I0219 20:54:44.266492 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-phlkl_250d2041-efa0-41fb-8b0e-e99ba8c1c14c/operator/0.log" Feb 19 20:54:44 crc kubenswrapper[4787]: I0219 20:54:44.484131 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-q46b7_0944c0f9-ef54-46cc-be37-a59477312705/manager/1.log" Feb 19 20:54:44 crc kubenswrapper[4787]: 
I0219 20:54:44.487836 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-q46b7_0944c0f9-ef54-46cc-be37-a59477312705/manager/0.log" Feb 19 20:54:44 crc kubenswrapper[4787]: I0219 20:54:44.950150 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-88xzz_26a6b075-ab07-4508-86f7-2af4934e078a/manager/0.log" Feb 19 20:54:45 crc kubenswrapper[4787]: I0219 20:54:45.099064 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-x8ltf_c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4/manager/1.log" Feb 19 20:54:45 crc kubenswrapper[4787]: I0219 20:54:45.181976 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-x8ltf_c3e6fa36-c4ad-47d6-9ccb-9fc66b1038a4/manager/0.log" Feb 19 20:54:45 crc kubenswrapper[4787]: I0219 20:54:45.344756 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58b878c868-9zgl9_90a8e9a7-b3db-4b64-bde8-569c3e8485d5/manager/0.log" Feb 19 20:54:45 crc kubenswrapper[4787]: I0219 20:54:45.379858 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b9547c8c7-s8zzh_36c1d908-879d-4d98-bd71-06b5c6e802e8/manager/0.log" Feb 19 20:54:45 crc kubenswrapper[4787]: I0219 20:54:45.419574 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-s78xt_bfb600b1-766e-4df0-9f20-a5b4ad0ed684/manager/0.log" Feb 19 20:54:50 crc kubenswrapper[4787]: I0219 20:54:50.545166 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-hz9f6_b7deddaa-9e2a-4e95-8dce-fb6b70a0523e/manager/0.log" Feb 19 20:54:56 crc 
kubenswrapper[4787]: I0219 20:54:56.892502 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:54:56 crc kubenswrapper[4787]: E0219 20:54:56.893441 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:55:08 crc kubenswrapper[4787]: I0219 20:55:08.115806 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-htwzm_5fbae455-2136-403f-bda5-236a4de586da/control-plane-machine-set-operator/0.log" Feb 19 20:55:08 crc kubenswrapper[4787]: I0219 20:55:08.284754 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xplkr_8ff3a7ef-0a27-40ac-8a37-186f0d4f0939/kube-rbac-proxy/0.log" Feb 19 20:55:08 crc kubenswrapper[4787]: I0219 20:55:08.337339 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xplkr_8ff3a7ef-0a27-40ac-8a37-186f0d4f0939/machine-api-operator/0.log" Feb 19 20:55:11 crc kubenswrapper[4787]: I0219 20:55:11.892459 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:55:11 crc kubenswrapper[4787]: E0219 20:55:11.893226 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:55:20 crc kubenswrapper[4787]: I0219 20:55:20.669506 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-cktgf_43388b8a-9947-4159-bb64-7dd8745c2e47/cert-manager-controller/0.log" Feb 19 20:55:20 crc kubenswrapper[4787]: I0219 20:55:20.846927 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xnxh7_99b1a5a1-71a5-4caa-a647-68f6e7d96b96/cert-manager-cainjector/0.log" Feb 19 20:55:20 crc kubenswrapper[4787]: I0219 20:55:20.871889 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-68jqv_f45abe44-787c-4b28-b7d1-e5b5b3e7d0e2/cert-manager-webhook/0.log" Feb 19 20:55:22 crc kubenswrapper[4787]: I0219 20:55:22.904844 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:55:22 crc kubenswrapper[4787]: E0219 20:55:22.905475 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:55:33 crc kubenswrapper[4787]: I0219 20:55:33.585860 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-58bxh_a07ff2fe-5085-4c1a-8139-4a47329c88bc/nmstate-console-plugin/0.log" Feb 19 20:55:33 crc kubenswrapper[4787]: I0219 20:55:33.787997 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5qk4m_94194c14-c7cd-4b05-bda1-74ea911cd6cf/nmstate-handler/0.log" Feb 
19 20:55:33 crc kubenswrapper[4787]: I0219 20:55:33.853251 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-h6fxn_b771e6b6-cd00-431a-84fb-970db07534bd/kube-rbac-proxy/0.log" Feb 19 20:55:33 crc kubenswrapper[4787]: I0219 20:55:33.939654 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-h6fxn_b771e6b6-cd00-431a-84fb-970db07534bd/nmstate-metrics/0.log" Feb 19 20:55:34 crc kubenswrapper[4787]: I0219 20:55:34.052535 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-bkxn5_5d632d9f-7b63-4c37-b21a-a8053bb0922e/nmstate-operator/0.log" Feb 19 20:55:34 crc kubenswrapper[4787]: I0219 20:55:34.174951 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-6sjw5_ace5ef3f-b2ed-4d41-a085-4c662e70061b/nmstate-webhook/0.log" Feb 19 20:55:34 crc kubenswrapper[4787]: I0219 20:55:34.893515 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:55:34 crc kubenswrapper[4787]: E0219 20:55:34.894274 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:55:46 crc kubenswrapper[4787]: I0219 20:55:46.894355 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:55:46 crc kubenswrapper[4787]: E0219 20:55:46.895588 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:55:47 crc kubenswrapper[4787]: I0219 20:55:47.937121 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55fc987df5-9spp8_aae80a85-0afc-42a9-817a-57570462dee1/kube-rbac-proxy/0.log" Feb 19 20:55:47 crc kubenswrapper[4787]: I0219 20:55:47.951409 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55fc987df5-9spp8_aae80a85-0afc-42a9-817a-57570462dee1/manager/1.log" Feb 19 20:55:48 crc kubenswrapper[4787]: I0219 20:55:48.132069 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55fc987df5-9spp8_aae80a85-0afc-42a9-817a-57570462dee1/manager/0.log" Feb 19 20:55:59 crc kubenswrapper[4787]: I0219 20:55:59.893123 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:55:59 crc kubenswrapper[4787]: E0219 20:55:59.894692 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:56:01 crc kubenswrapper[4787]: I0219 20:56:01.355009 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-h2xrw_b97e9051-e506-426c-9612-f504a878f9ed/prometheus-operator/0.log" Feb 19 20:56:01 crc kubenswrapper[4787]: I0219 20:56:01.526916 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8/prometheus-operator-admission-webhook/0.log" Feb 19 20:56:01 crc kubenswrapper[4787]: I0219 20:56:01.622872 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_41d8edf4-0b35-4651-a626-3a635bbf22a5/prometheus-operator-admission-webhook/0.log" Feb 19 20:56:01 crc kubenswrapper[4787]: I0219 20:56:01.770350 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9n572_963c18fc-03cd-46a4-9130-3908e897870e/operator/0.log" Feb 19 20:56:01 crc kubenswrapper[4787]: I0219 20:56:01.816540 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-mjrdv_191731f4-3080-4ae3-9aab-f44a30a33246/observability-ui-dashboards/0.log" Feb 19 20:56:01 crc kubenswrapper[4787]: I0219 20:56:01.986042 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tfckq_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57/perses-operator/0.log" Feb 19 20:56:11 crc kubenswrapper[4787]: I0219 20:56:11.893204 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:56:11 crc kubenswrapper[4787]: E0219 20:56:11.894277 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:56:16 crc kubenswrapper[4787]: I0219 20:56:16.782241 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-vddhp_c3fc79ee-4854-4886-a549-0baadec47ffd/cluster-logging-operator/0.log" Feb 19 20:56:16 crc kubenswrapper[4787]: I0219 20:56:16.969290 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-h749t_5b05acd3-34d5-4c6c-a559-2ec0d39761c9/collector/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.031231 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_46ee23a2-1b37-42e7-899f-5c1c70a6755b/loki-compactor/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.181157 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-dkzcx_2c6f8721-8336-47fa-b27a-6c897006b94e/loki-distributor/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.212442 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-65d54b8875-96vjl_ffe2a444-f47e-4193-b322-5943bf473b44/gateway/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.303804 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-65d54b8875-96vjl_ffe2a444-f47e-4193-b322-5943bf473b44/opa/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.414160 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-65d54b8875-tjbh7_47705ce6-ef81-47a2-bcd3-a10b7bb9317a/gateway/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.480279 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-65d54b8875-tjbh7_47705ce6-ef81-47a2-bcd3-a10b7bb9317a/opa/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.652116 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_47f3a5fe-a7c4-47d7-a8b3-a367f2eaccca/loki-index-gateway/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.763172 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_74b9f2e5-3b9f-4af9-990f-147a1c6f8943/loki-ingester/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.874031 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-nj9wt_ca0d4193-66a0-48c4-8932-8827eaac2c2b/loki-querier/0.log" Feb 19 20:56:17 crc kubenswrapper[4787]: I0219 20:56:17.956206 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-htw48_9d8ca7ab-f667-423c-926e-a9e2cfc10c1b/loki-query-frontend/0.log" Feb 19 20:56:23 crc kubenswrapper[4787]: I0219 20:56:23.892295 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:56:23 crc kubenswrapper[4787]: E0219 20:56:23.893043 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.205342 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-9sqnr_56aef487-656a-47b1-b3b4-d9fe6f62b1f4/kube-rbac-proxy/0.log" Feb 19 20:56:32 
crc kubenswrapper[4787]: I0219 20:56:32.393735 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-9sqnr_56aef487-656a-47b1-b3b4-d9fe6f62b1f4/controller/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.456442 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-frr-files/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.648160 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-frr-files/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.665367 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-metrics/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.698292 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-reloader/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.720431 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-reloader/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.914845 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-reloader/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.929341 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-frr-files/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.965002 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-metrics/0.log" Feb 19 20:56:32 crc kubenswrapper[4787]: I0219 20:56:32.983234 4787 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-metrics/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.126404 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-reloader/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.164525 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-frr-files/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.199740 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/cp-metrics/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.206032 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/controller/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.412443 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/frr-metrics/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.428178 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/kube-rbac-proxy/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.545485 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/frr/1.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.718310 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/kube-rbac-proxy-frr/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.725717 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/reloader/0.log" Feb 19 20:56:33 crc kubenswrapper[4787]: I0219 20:56:33.962066 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-dq2f5_d56f4bb8-5768-45e0-9cf9-6d759249fe69/frr-k8s-webhook-server/0.log" Feb 19 20:56:34 crc kubenswrapper[4787]: I0219 20:56:34.097652 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59989c9b4f-q9rqs_58f1bc3e-9217-48c3-80af-e4979969b991/manager/1.log" Feb 19 20:56:34 crc kubenswrapper[4787]: I0219 20:56:34.314554 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59989c9b4f-q9rqs_58f1bc3e-9217-48c3-80af-e4979969b991/manager/0.log" Feb 19 20:56:34 crc kubenswrapper[4787]: I0219 20:56:34.348948 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-69988d54ff-ndzss_a2428ab4-02d6-4400-820b-995a002fb38c/webhook-server/0.log" Feb 19 20:56:34 crc kubenswrapper[4787]: I0219 20:56:34.524045 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pn7bv_af7a16ca-ed17-45d6-aa9e-f2552dc92af7/kube-rbac-proxy/0.log" Feb 19 20:56:35 crc kubenswrapper[4787]: I0219 20:56:35.074109 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mvnk9_8eeee751-e7e9-412b-81cf-2bd7e702303d/frr/0.log" Feb 19 20:56:35 crc kubenswrapper[4787]: I0219 20:56:35.160623 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pn7bv_af7a16ca-ed17-45d6-aa9e-f2552dc92af7/speaker/0.log" Feb 19 20:56:36 crc kubenswrapper[4787]: I0219 20:56:36.892355 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:56:36 crc kubenswrapper[4787]: E0219 20:56:36.893231 4787 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.518158 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/util/0.log" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.717838 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/pull/0.log" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.728414 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/util/0.log" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.739117 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/pull/0.log" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.891227 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/util/0.log" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.893258 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:56:47 crc kubenswrapper[4787]: E0219 20:56:47.893655 4787 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.914315 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/pull/0.log" Feb 19 20:56:47 crc kubenswrapper[4787]: I0219 20:56:47.969833 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19dqjxp_02139a9b-2832-4a5f-8d79-e553400a8422/extract/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.095649 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/util/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.296285 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/util/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.311300 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/pull/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.319083 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/pull/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: 
I0219 20:56:48.567845 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/util/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.573765 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/pull/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.591600 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086gckh_b8b41187-185a-475c-84c9-5d64f4343eac/extract/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.736652 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/util/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.970978 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/pull/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.978113 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/pull/0.log" Feb 19 20:56:48 crc kubenswrapper[4787]: I0219 20:56:48.985597 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/util/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.185163 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/pull/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.218762 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/extract/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.287769 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b8ngj_82c7999d-e5ab-4203-8a11-784941050660/util/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.371963 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/extract-utilities/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.575662 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/extract-utilities/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.642456 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/extract-content/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.656601 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/extract-content/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.810565 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/extract-utilities/0.log" Feb 19 20:56:49 crc kubenswrapper[4787]: I0219 20:56:49.852875 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/extract-content/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.039981 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/extract-utilities/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.308910 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/extract-content/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.362158 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/extract-content/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.392902 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/extract-utilities/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.591451 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/extract-utilities/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.650855 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/extract-content/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.693865 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6mcds_634b3e3d-f43d-4d5c-996c-02c5277282ef/registry-server/0.log" Feb 19 20:56:50 crc kubenswrapper[4787]: I0219 20:56:50.900077 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/util/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.154217 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/pull/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.172329 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/pull/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.191491 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/util/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.438583 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/pull/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.490095 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/extract/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.500569 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e089895lb28_05bb6877-4f7a-44ef-9473-256081113294/util/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.564439 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jvnpx_1f861785-2aa2-4b3b-aca7-90a83d68bcd8/registry-server/0.log" Feb 19 20:56:51 crc 
kubenswrapper[4787]: I0219 20:56:51.685557 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/util/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.828732 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/pull/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.860664 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/pull/0.log" Feb 19 20:56:51 crc kubenswrapper[4787]: I0219 20:56:51.883059 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/util/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.054761 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/extract/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.055019 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/util/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.066197 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gdqt5_34c814cb-c6f7-48b1-8153-e532e5f71bc1/marketplace-operator/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.094774 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecasz6md_10a11b8e-3ef6-4880-8c24-b4d760a6241a/pull/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.213315 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/extract-utilities/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.373383 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/extract-utilities/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.390354 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/extract-content/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.392509 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/extract-content/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.600440 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/extract-content/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.644058 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/extract-utilities/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.753443 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/extract-utilities/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.901302 4787 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gfvj_426998bc-15ae-476b-93e7-04f7591afce3/registry-server/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.917989 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/extract-utilities/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.931109 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/extract-content/0.log" Feb 19 20:56:52 crc kubenswrapper[4787]: I0219 20:56:52.989778 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/extract-content/0.log" Feb 19 20:56:53 crc kubenswrapper[4787]: I0219 20:56:53.111599 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/extract-utilities/0.log" Feb 19 20:56:53 crc kubenswrapper[4787]: I0219 20:56:53.130249 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/extract-content/0.log" Feb 19 20:56:53 crc kubenswrapper[4787]: I0219 20:56:53.880503 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pgqnv_5b15ff10-c8f6-43ca-9538-e781e30d1842/registry-server/0.log" Feb 19 20:56:58 crc kubenswrapper[4787]: I0219 20:56:58.891659 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:56:58 crc kubenswrapper[4787]: E0219 20:56:58.893349 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:57:06 crc kubenswrapper[4787]: I0219 20:57:06.075977 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-h2xrw_b97e9051-e506-426c-9612-f504a878f9ed/prometheus-operator/0.log" Feb 19 20:57:06 crc kubenswrapper[4787]: I0219 20:57:06.095632 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68bfc5d76d-k57kd_4e7bb705-7e46-4eb6-93c1-a124f8ca77c8/prometheus-operator-admission-webhook/0.log" Feb 19 20:57:06 crc kubenswrapper[4787]: I0219 20:57:06.117133 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-68bfc5d76d-mssvb_41d8edf4-0b35-4651-a626-3a635bbf22a5/prometheus-operator-admission-webhook/0.log" Feb 19 20:57:06 crc kubenswrapper[4787]: I0219 20:57:06.336459 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tfckq_a5709e38-dd1f-4a2a-ba8f-4da0055aaf57/perses-operator/0.log" Feb 19 20:57:06 crc kubenswrapper[4787]: I0219 20:57:06.358853 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9n572_963c18fc-03cd-46a4-9130-3908e897870e/operator/0.log" Feb 19 20:57:06 crc kubenswrapper[4787]: I0219 20:57:06.361072 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-mjrdv_191731f4-3080-4ae3-9aab-f44a30a33246/observability-ui-dashboards/0.log" Feb 19 20:57:09 crc kubenswrapper[4787]: I0219 20:57:09.892016 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:57:09 crc 
kubenswrapper[4787]: E0219 20:57:09.892993 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:57:20 crc kubenswrapper[4787]: I0219 20:57:20.524145 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55fc987df5-9spp8_aae80a85-0afc-42a9-817a-57570462dee1/manager/1.log" Feb 19 20:57:20 crc kubenswrapper[4787]: I0219 20:57:20.552744 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55fc987df5-9spp8_aae80a85-0afc-42a9-817a-57570462dee1/manager/0.log" Feb 19 20:57:20 crc kubenswrapper[4787]: I0219 20:57:20.605901 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55fc987df5-9spp8_aae80a85-0afc-42a9-817a-57570462dee1/kube-rbac-proxy/0.log" Feb 19 20:57:21 crc kubenswrapper[4787]: I0219 20:57:21.892920 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:57:21 crc kubenswrapper[4787]: E0219 20:57:21.893728 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:57:34 crc kubenswrapper[4787]: I0219 20:57:34.892880 4787 
scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:57:34 crc kubenswrapper[4787]: E0219 20:57:34.893879 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:57:45 crc kubenswrapper[4787]: I0219 20:57:45.892330 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:57:45 crc kubenswrapper[4787]: E0219 20:57:45.893131 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.168559 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g2ft8"] Feb 19 20:57:51 crc kubenswrapper[4787]: E0219 20:57:51.169746 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c1776e-4925-4f16-97a7-c175efe21dd5" containerName="container-00" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.169764 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c1776e-4925-4f16-97a7-c175efe21dd5" containerName="container-00" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.170061 4787 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="76c1776e-4925-4f16-97a7-c175efe21dd5" containerName="container-00" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.175943 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.220268 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-utilities\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.220359 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-catalog-content\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.220382 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxcs\" (UniqueName: \"kubernetes.io/projected/8a30f560-564f-4345-a3ef-6573b5b14416-kube-api-access-qnxcs\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.266506 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2ft8"] Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.322748 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-utilities\") pod \"redhat-marketplace-g2ft8\" (UID: 
\"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.322853 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxcs\" (UniqueName: \"kubernetes.io/projected/8a30f560-564f-4345-a3ef-6573b5b14416-kube-api-access-qnxcs\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.322892 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-catalog-content\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.323387 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-utilities\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.323406 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-catalog-content\") pod \"redhat-marketplace-g2ft8\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.353459 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxcs\" (UniqueName: \"kubernetes.io/projected/8a30f560-564f-4345-a3ef-6573b5b14416-kube-api-access-qnxcs\") pod \"redhat-marketplace-g2ft8\" (UID: 
\"8a30f560-564f-4345-a3ef-6573b5b14416\") " pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:51 crc kubenswrapper[4787]: I0219 20:57:51.506308 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:57:52 crc kubenswrapper[4787]: I0219 20:57:52.842571 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2ft8"] Feb 19 20:57:53 crc kubenswrapper[4787]: I0219 20:57:53.510454 4787 generic.go:334] "Generic (PLEG): container finished" podID="8a30f560-564f-4345-a3ef-6573b5b14416" containerID="29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123" exitCode=0 Feb 19 20:57:53 crc kubenswrapper[4787]: I0219 20:57:53.511042 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerDied","Data":"29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123"} Feb 19 20:57:53 crc kubenswrapper[4787]: I0219 20:57:53.511141 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerStarted","Data":"50229e056e5fb6a823bdd1bda25653fced844f1bc270ee1d9919babc24b4951c"} Feb 19 20:57:53 crc kubenswrapper[4787]: I0219 20:57:53.515065 4787 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 20:57:55 crc kubenswrapper[4787]: I0219 20:57:55.538568 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerStarted","Data":"981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24"} Feb 19 20:57:56 crc kubenswrapper[4787]: I0219 20:57:56.550407 4787 generic.go:334] "Generic (PLEG): container finished" 
podID="8a30f560-564f-4345-a3ef-6573b5b14416" containerID="981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24" exitCode=0 Feb 19 20:57:56 crc kubenswrapper[4787]: I0219 20:57:56.550473 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerDied","Data":"981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24"} Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.319575 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v2qwt"] Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.323193 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.335069 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v2qwt"] Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.485365 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-catalog-content\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.485417 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkvs4\" (UniqueName: \"kubernetes.io/projected/ee14fd8b-acad-446b-92e9-f0c982c2c36d-kube-api-access-lkvs4\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.485623 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-utilities\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.570854 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerStarted","Data":"d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad"} Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.587468 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-catalog-content\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.587528 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkvs4\" (UniqueName: \"kubernetes.io/projected/ee14fd8b-acad-446b-92e9-f0c982c2c36d-kube-api-access-lkvs4\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.587726 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-utilities\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.588473 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-utilities\") pod \"redhat-operators-v2qwt\" (UID: 
\"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.588712 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-catalog-content\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.592954 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g2ft8" podStartSLOduration=3.167895483 podStartE2EDuration="6.592924547s" podCreationTimestamp="2026-02-19 20:57:51 +0000 UTC" firstStartedPulling="2026-02-19 20:57:53.513061027 +0000 UTC m=+5941.303726969" lastFinishedPulling="2026-02-19 20:57:56.938090091 +0000 UTC m=+5944.728756033" observedRunningTime="2026-02-19 20:57:57.58596871 +0000 UTC m=+5945.376634652" watchObservedRunningTime="2026-02-19 20:57:57.592924547 +0000 UTC m=+5945.383590489" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.614357 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkvs4\" (UniqueName: \"kubernetes.io/projected/ee14fd8b-acad-446b-92e9-f0c982c2c36d-kube-api-access-lkvs4\") pod \"redhat-operators-v2qwt\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.652258 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:57:57 crc kubenswrapper[4787]: I0219 20:57:57.893809 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:57:57 crc kubenswrapper[4787]: E0219 20:57:57.894401 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:57:58 crc kubenswrapper[4787]: I0219 20:57:58.211814 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v2qwt"] Feb 19 20:57:58 crc kubenswrapper[4787]: W0219 20:57:58.215884 4787 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee14fd8b_acad_446b_92e9_f0c982c2c36d.slice/crio-e839250a3685605a93e8c2fa48ac0b8562a17e0114b8d02a2e63a04fac5f069b WatchSource:0}: Error finding container e839250a3685605a93e8c2fa48ac0b8562a17e0114b8d02a2e63a04fac5f069b: Status 404 returned error can't find the container with id e839250a3685605a93e8c2fa48ac0b8562a17e0114b8d02a2e63a04fac5f069b Feb 19 20:57:58 crc kubenswrapper[4787]: I0219 20:57:58.588317 4787 generic.go:334] "Generic (PLEG): container finished" podID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerID="88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd" exitCode=0 Feb 19 20:57:58 crc kubenswrapper[4787]: I0219 20:57:58.588974 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2qwt" 
event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerDied","Data":"88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd"} Feb 19 20:57:58 crc kubenswrapper[4787]: I0219 20:57:58.589025 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2qwt" event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerStarted","Data":"e839250a3685605a93e8c2fa48ac0b8562a17e0114b8d02a2e63a04fac5f069b"} Feb 19 20:57:59 crc kubenswrapper[4787]: I0219 20:57:59.600975 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2qwt" event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerStarted","Data":"d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5"} Feb 19 20:58:01 crc kubenswrapper[4787]: I0219 20:58:01.507132 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:58:01 crc kubenswrapper[4787]: I0219 20:58:01.507190 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:58:01 crc kubenswrapper[4787]: I0219 20:58:01.585335 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:58:05 crc kubenswrapper[4787]: I0219 20:58:05.680217 4787 generic.go:334] "Generic (PLEG): container finished" podID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerID="d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5" exitCode=0 Feb 19 20:58:05 crc kubenswrapper[4787]: I0219 20:58:05.680304 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2qwt" event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerDied","Data":"d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5"} Feb 19 20:58:06 crc kubenswrapper[4787]: I0219 20:58:06.693702 4787 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2qwt" event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerStarted","Data":"2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2"} Feb 19 20:58:06 crc kubenswrapper[4787]: I0219 20:58:06.716630 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v2qwt" podStartSLOduration=2.190945138 podStartE2EDuration="9.716592153s" podCreationTimestamp="2026-02-19 20:57:57 +0000 UTC" firstStartedPulling="2026-02-19 20:57:58.59237491 +0000 UTC m=+5946.383040852" lastFinishedPulling="2026-02-19 20:58:06.118021925 +0000 UTC m=+5953.908687867" observedRunningTime="2026-02-19 20:58:06.709317176 +0000 UTC m=+5954.499983138" watchObservedRunningTime="2026-02-19 20:58:06.716592153 +0000 UTC m=+5954.507258095" Feb 19 20:58:07 crc kubenswrapper[4787]: I0219 20:58:07.652660 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:58:07 crc kubenswrapper[4787]: I0219 20:58:07.652716 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:58:08 crc kubenswrapper[4787]: I0219 20:58:08.708195 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v2qwt" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" probeResult="failure" output=< Feb 19 20:58:08 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:58:08 crc kubenswrapper[4787]: > Feb 19 20:58:08 crc kubenswrapper[4787]: I0219 20:58:08.892480 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:58:08 crc kubenswrapper[4787]: E0219 20:58:08.892798 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:58:11 crc kubenswrapper[4787]: I0219 20:58:11.562666 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:58:11 crc kubenswrapper[4787]: I0219 20:58:11.624044 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2ft8"] Feb 19 20:58:11 crc kubenswrapper[4787]: I0219 20:58:11.742679 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g2ft8" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="registry-server" containerID="cri-o://d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad" gracePeriod=2 Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.367470 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.424487 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnxcs\" (UniqueName: \"kubernetes.io/projected/8a30f560-564f-4345-a3ef-6573b5b14416-kube-api-access-qnxcs\") pod \"8a30f560-564f-4345-a3ef-6573b5b14416\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.424585 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-catalog-content\") pod \"8a30f560-564f-4345-a3ef-6573b5b14416\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.424871 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-utilities\") pod \"8a30f560-564f-4345-a3ef-6573b5b14416\" (UID: \"8a30f560-564f-4345-a3ef-6573b5b14416\") " Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.429590 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-utilities" (OuterVolumeSpecName: "utilities") pod "8a30f560-564f-4345-a3ef-6573b5b14416" (UID: "8a30f560-564f-4345-a3ef-6573b5b14416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.475285 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a30f560-564f-4345-a3ef-6573b5b14416-kube-api-access-qnxcs" (OuterVolumeSpecName: "kube-api-access-qnxcs") pod "8a30f560-564f-4345-a3ef-6573b5b14416" (UID: "8a30f560-564f-4345-a3ef-6573b5b14416"). InnerVolumeSpecName "kube-api-access-qnxcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.511004 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a30f560-564f-4345-a3ef-6573b5b14416" (UID: "8a30f560-564f-4345-a3ef-6573b5b14416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.535122 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.535301 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnxcs\" (UniqueName: \"kubernetes.io/projected/8a30f560-564f-4345-a3ef-6573b5b14416-kube-api-access-qnxcs\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.535373 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a30f560-564f-4345-a3ef-6573b5b14416-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.758067 4787 generic.go:334] "Generic (PLEG): container finished" podID="8a30f560-564f-4345-a3ef-6573b5b14416" containerID="d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad" exitCode=0 Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.758123 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerDied","Data":"d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad"} Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.758167 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-g2ft8" event={"ID":"8a30f560-564f-4345-a3ef-6573b5b14416","Type":"ContainerDied","Data":"50229e056e5fb6a823bdd1bda25653fced844f1bc270ee1d9919babc24b4951c"} Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.758187 4787 scope.go:117] "RemoveContainer" containerID="d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.758388 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g2ft8" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.799660 4787 scope.go:117] "RemoveContainer" containerID="981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.819259 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2ft8"] Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.839684 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g2ft8"] Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.865765 4787 scope.go:117] "RemoveContainer" containerID="29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.919254 4787 scope.go:117] "RemoveContainer" containerID="d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.919374 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" path="/var/lib/kubelet/pods/8a30f560-564f-4345-a3ef-6573b5b14416/volumes" Feb 19 20:58:12 crc kubenswrapper[4787]: E0219 20:58:12.920418 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad\": container with ID 
starting with d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad not found: ID does not exist" containerID="d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.920452 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad"} err="failed to get container status \"d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad\": rpc error: code = NotFound desc = could not find container \"d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad\": container with ID starting with d99cecf6468f88ab2f29f90d48b03a3e1d87507fef6f9004ff29c1782f3bd7ad not found: ID does not exist" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.920473 4787 scope.go:117] "RemoveContainer" containerID="981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24" Feb 19 20:58:12 crc kubenswrapper[4787]: E0219 20:58:12.921910 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24\": container with ID starting with 981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24 not found: ID does not exist" containerID="981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.921944 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24"} err="failed to get container status \"981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24\": rpc error: code = NotFound desc = could not find container \"981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24\": container with ID starting with 981500d6217e53adc3597f026d0ed4489b4c09a59c29fd244fc90f93c531ba24 not found: 
ID does not exist" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.921982 4787 scope.go:117] "RemoveContainer" containerID="29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123" Feb 19 20:58:12 crc kubenswrapper[4787]: E0219 20:58:12.923493 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123\": container with ID starting with 29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123 not found: ID does not exist" containerID="29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123" Feb 19 20:58:12 crc kubenswrapper[4787]: I0219 20:58:12.923516 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123"} err="failed to get container status \"29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123\": rpc error: code = NotFound desc = could not find container \"29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123\": container with ID starting with 29562024465acc11d8724708bdcfc61696a07f7bd7abcb2033b95cef4d1d3123 not found: ID does not exist" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.218428 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9phg"] Feb 19 20:58:15 crc kubenswrapper[4787]: E0219 20:58:15.218950 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="registry-server" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.218961 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="registry-server" Feb 19 20:58:15 crc kubenswrapper[4787]: E0219 20:58:15.219003 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" 
containerName="extract-utilities" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.219011 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="extract-utilities" Feb 19 20:58:15 crc kubenswrapper[4787]: E0219 20:58:15.219029 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="extract-content" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.219036 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="extract-content" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.219265 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a30f560-564f-4345-a3ef-6573b5b14416" containerName="registry-server" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.220904 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.232807 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9phg"] Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.300165 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-catalog-content\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.300555 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqwc\" (UniqueName: \"kubernetes.io/projected/ab4bf41e-ee63-4794-8190-a253d4ece450-kube-api-access-bjqwc\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " 
pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.300636 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-utilities\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.403128 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-catalog-content\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.403225 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqwc\" (UniqueName: \"kubernetes.io/projected/ab4bf41e-ee63-4794-8190-a253d4ece450-kube-api-access-bjqwc\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.403301 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-utilities\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.403813 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-utilities\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " 
pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.403831 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-catalog-content\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.436391 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqwc\" (UniqueName: \"kubernetes.io/projected/ab4bf41e-ee63-4794-8190-a253d4ece450-kube-api-access-bjqwc\") pod \"certified-operators-l9phg\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:15 crc kubenswrapper[4787]: I0219 20:58:15.586866 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:16 crc kubenswrapper[4787]: I0219 20:58:16.097230 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9phg"] Feb 19 20:58:16 crc kubenswrapper[4787]: I0219 20:58:16.830067 4787 generic.go:334] "Generic (PLEG): container finished" podID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerID="9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113" exitCode=0 Feb 19 20:58:16 crc kubenswrapper[4787]: I0219 20:58:16.830187 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerDied","Data":"9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113"} Feb 19 20:58:16 crc kubenswrapper[4787]: I0219 20:58:16.831243 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" 
event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerStarted","Data":"ceecf8491b0e770e1c31f25a072a3153267ca35f012a1980f27e68f96490bafa"} Feb 19 20:58:17 crc kubenswrapper[4787]: I0219 20:58:17.843466 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerStarted","Data":"4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194"} Feb 19 20:58:18 crc kubenswrapper[4787]: I0219 20:58:18.704275 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v2qwt" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" probeResult="failure" output=< Feb 19 20:58:18 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:58:18 crc kubenswrapper[4787]: > Feb 19 20:58:18 crc kubenswrapper[4787]: I0219 20:58:18.855272 4787 generic.go:334] "Generic (PLEG): container finished" podID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerID="4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194" exitCode=0 Feb 19 20:58:18 crc kubenswrapper[4787]: I0219 20:58:18.855324 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerDied","Data":"4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194"} Feb 19 20:58:19 crc kubenswrapper[4787]: I0219 20:58:19.869147 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerStarted","Data":"2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c"} Feb 19 20:58:19 crc kubenswrapper[4787]: I0219 20:58:19.905476 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9phg" 
podStartSLOduration=2.41050614 podStartE2EDuration="4.905458963s" podCreationTimestamp="2026-02-19 20:58:15 +0000 UTC" firstStartedPulling="2026-02-19 20:58:16.832430594 +0000 UTC m=+5964.623096536" lastFinishedPulling="2026-02-19 20:58:19.327383417 +0000 UTC m=+5967.118049359" observedRunningTime="2026-02-19 20:58:19.892710531 +0000 UTC m=+5967.683376473" watchObservedRunningTime="2026-02-19 20:58:19.905458963 +0000 UTC m=+5967.696124905" Feb 19 20:58:23 crc kubenswrapper[4787]: I0219 20:58:23.893951 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:58:23 crc kubenswrapper[4787]: E0219 20:58:23.894827 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:58:25 crc kubenswrapper[4787]: I0219 20:58:25.587150 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:25 crc kubenswrapper[4787]: I0219 20:58:25.588892 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:25 crc kubenswrapper[4787]: I0219 20:58:25.640057 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:26 crc kubenswrapper[4787]: I0219 20:58:26.011083 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:26 crc kubenswrapper[4787]: I0219 20:58:26.094396 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-l9phg"] Feb 19 20:58:27 crc kubenswrapper[4787]: I0219 20:58:27.960978 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9phg" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="registry-server" containerID="cri-o://2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c" gracePeriod=2 Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.511153 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.644175 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-utilities\") pod \"ab4bf41e-ee63-4794-8190-a253d4ece450\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.644367 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjqwc\" (UniqueName: \"kubernetes.io/projected/ab4bf41e-ee63-4794-8190-a253d4ece450-kube-api-access-bjqwc\") pod \"ab4bf41e-ee63-4794-8190-a253d4ece450\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.644699 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-catalog-content\") pod \"ab4bf41e-ee63-4794-8190-a253d4ece450\" (UID: \"ab4bf41e-ee63-4794-8190-a253d4ece450\") " Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.645269 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-utilities" (OuterVolumeSpecName: "utilities") pod "ab4bf41e-ee63-4794-8190-a253d4ece450" (UID: 
"ab4bf41e-ee63-4794-8190-a253d4ece450"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.654013 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4bf41e-ee63-4794-8190-a253d4ece450-kube-api-access-bjqwc" (OuterVolumeSpecName: "kube-api-access-bjqwc") pod "ab4bf41e-ee63-4794-8190-a253d4ece450" (UID: "ab4bf41e-ee63-4794-8190-a253d4ece450"). InnerVolumeSpecName "kube-api-access-bjqwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.708166 4787 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v2qwt" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" probeResult="failure" output=< Feb 19 20:58:28 crc kubenswrapper[4787]: timeout: failed to connect service ":50051" within 1s Feb 19 20:58:28 crc kubenswrapper[4787]: > Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.718867 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab4bf41e-ee63-4794-8190-a253d4ece450" (UID: "ab4bf41e-ee63-4794-8190-a253d4ece450"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.748745 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.748798 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4bf41e-ee63-4794-8190-a253d4ece450-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.748809 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjqwc\" (UniqueName: \"kubernetes.io/projected/ab4bf41e-ee63-4794-8190-a253d4ece450-kube-api-access-bjqwc\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.976176 4787 generic.go:334] "Generic (PLEG): container finished" podID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerID="2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c" exitCode=0 Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.976232 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerDied","Data":"2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c"} Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.976242 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9phg" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.976269 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9phg" event={"ID":"ab4bf41e-ee63-4794-8190-a253d4ece450","Type":"ContainerDied","Data":"ceecf8491b0e770e1c31f25a072a3153267ca35f012a1980f27e68f96490bafa"} Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.976292 4787 scope.go:117] "RemoveContainer" containerID="2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c" Feb 19 20:58:28 crc kubenswrapper[4787]: I0219 20:58:28.998795 4787 scope.go:117] "RemoveContainer" containerID="4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.015278 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9phg"] Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.019318 4787 scope.go:117] "RemoveContainer" containerID="9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.039123 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9phg"] Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.075859 4787 scope.go:117] "RemoveContainer" containerID="2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c" Feb 19 20:58:29 crc kubenswrapper[4787]: E0219 20:58:29.076274 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c\": container with ID starting with 2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c not found: ID does not exist" containerID="2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.076318 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c"} err="failed to get container status \"2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c\": rpc error: code = NotFound desc = could not find container \"2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c\": container with ID starting with 2184085002eb5088df98056047c3425a645f1826e4a1155d4615b94f90ba4f6c not found: ID does not exist" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.076346 4787 scope.go:117] "RemoveContainer" containerID="4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194" Feb 19 20:58:29 crc kubenswrapper[4787]: E0219 20:58:29.076735 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194\": container with ID starting with 4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194 not found: ID does not exist" containerID="4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.076774 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194"} err="failed to get container status \"4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194\": rpc error: code = NotFound desc = could not find container \"4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194\": container with ID starting with 4b4bda8224c64894c9e817437450be8bb8f61b2ad1226239b64c8143bc414194 not found: ID does not exist" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.076803 4787 scope.go:117] "RemoveContainer" containerID="9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113" Feb 19 20:58:29 crc kubenswrapper[4787]: E0219 
20:58:29.077117 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113\": container with ID starting with 9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113 not found: ID does not exist" containerID="9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113" Feb 19 20:58:29 crc kubenswrapper[4787]: I0219 20:58:29.077150 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113"} err="failed to get container status \"9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113\": rpc error: code = NotFound desc = could not find container \"9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113\": container with ID starting with 9117fe0813d0437346a7cce1567e69b0042f538e7ce977b657bd868114274113 not found: ID does not exist" Feb 19 20:58:29 crc kubenswrapper[4787]: E0219 20:58:29.110010 4787 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab4bf41e_ee63_4794_8190_a253d4ece450.slice/crio-ceecf8491b0e770e1c31f25a072a3153267ca35f012a1980f27e68f96490bafa\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab4bf41e_ee63_4794_8190_a253d4ece450.slice\": RecentStats: unable to find data in memory cache]" Feb 19 20:58:30 crc kubenswrapper[4787]: I0219 20:58:30.910725 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" path="/var/lib/kubelet/pods/ab4bf41e-ee63-4794-8190-a253d4ece450/volumes" Feb 19 20:58:36 crc kubenswrapper[4787]: I0219 20:58:36.894111 4787 scope.go:117] "RemoveContainer" 
containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:58:36 crc kubenswrapper[4787]: E0219 20:58:36.896010 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 20:58:37 crc kubenswrapper[4787]: I0219 20:58:37.723841 4787 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:58:37 crc kubenswrapper[4787]: I0219 20:58:37.778305 4787 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:58:37 crc kubenswrapper[4787]: I0219 20:58:37.971301 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v2qwt"] Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.125031 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v2qwt" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" containerID="cri-o://2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2" gracePeriod=2 Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.685414 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.816125 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-catalog-content\") pod \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.816163 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkvs4\" (UniqueName: \"kubernetes.io/projected/ee14fd8b-acad-446b-92e9-f0c982c2c36d-kube-api-access-lkvs4\") pod \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.816401 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-utilities\") pod \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\" (UID: \"ee14fd8b-acad-446b-92e9-f0c982c2c36d\") " Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.817422 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-utilities" (OuterVolumeSpecName: "utilities") pod "ee14fd8b-acad-446b-92e9-f0c982c2c36d" (UID: "ee14fd8b-acad-446b-92e9-f0c982c2c36d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.823622 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee14fd8b-acad-446b-92e9-f0c982c2c36d-kube-api-access-lkvs4" (OuterVolumeSpecName: "kube-api-access-lkvs4") pod "ee14fd8b-acad-446b-92e9-f0c982c2c36d" (UID: "ee14fd8b-acad-446b-92e9-f0c982c2c36d"). InnerVolumeSpecName "kube-api-access-lkvs4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.919345 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkvs4\" (UniqueName: \"kubernetes.io/projected/ee14fd8b-acad-446b-92e9-f0c982c2c36d-kube-api-access-lkvs4\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.919384 4787 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:39 crc kubenswrapper[4787]: I0219 20:58:39.935294 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee14fd8b-acad-446b-92e9-f0c982c2c36d" (UID: "ee14fd8b-acad-446b-92e9-f0c982c2c36d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.021815 4787 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee14fd8b-acad-446b-92e9-f0c982c2c36d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.150665 4787 generic.go:334] "Generic (PLEG): container finished" podID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerID="2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2" exitCode=0 Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.150720 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v2qwt" event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerDied","Data":"2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2"} Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.150756 4787 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-v2qwt" event={"ID":"ee14fd8b-acad-446b-92e9-f0c982c2c36d","Type":"ContainerDied","Data":"e839250a3685605a93e8c2fa48ac0b8562a17e0114b8d02a2e63a04fac5f069b"} Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.150778 4787 scope.go:117] "RemoveContainer" containerID="2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.150972 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v2qwt" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.173351 4787 scope.go:117] "RemoveContainer" containerID="d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.201340 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v2qwt"] Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.210274 4787 scope.go:117] "RemoveContainer" containerID="88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.220499 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v2qwt"] Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.272364 4787 scope.go:117] "RemoveContainer" containerID="2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2" Feb 19 20:58:40 crc kubenswrapper[4787]: E0219 20:58:40.273788 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2\": container with ID starting with 2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2 not found: ID does not exist" containerID="2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.273831 4787 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2"} err="failed to get container status \"2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2\": rpc error: code = NotFound desc = could not find container \"2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2\": container with ID starting with 2675cb790cfb04362fb0a768d3cc1f264c8b041099d7931165796945cbdc7db2 not found: ID does not exist" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.273857 4787 scope.go:117] "RemoveContainer" containerID="d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5" Feb 19 20:58:40 crc kubenswrapper[4787]: E0219 20:58:40.274109 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5\": container with ID starting with d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5 not found: ID does not exist" containerID="d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.274141 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5"} err="failed to get container status \"d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5\": rpc error: code = NotFound desc = could not find container \"d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5\": container with ID starting with d6a430ce8ea641937de7cf3c2896ee8cf813bab5663fad983938fa23bb0202e5 not found: ID does not exist" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.274162 4787 scope.go:117] "RemoveContainer" containerID="88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd" Feb 19 20:58:40 crc kubenswrapper[4787]: E0219 
20:58:40.274605 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd\": container with ID starting with 88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd not found: ID does not exist" containerID="88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.274658 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd"} err="failed to get container status \"88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd\": rpc error: code = NotFound desc = could not find container \"88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd\": container with ID starting with 88605f515021d55ffd1f0fc29af2679aa1e9793f0e0631b3313a6f9584fecbdd not found: ID does not exist" Feb 19 20:58:40 crc kubenswrapper[4787]: I0219 20:58:40.906376 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" path="/var/lib/kubelet/pods/ee14fd8b-acad-446b-92e9-f0c982c2c36d/volumes" Feb 19 20:58:51 crc kubenswrapper[4787]: I0219 20:58:51.893279 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 20:58:52 crc kubenswrapper[4787]: I0219 20:58:52.290702 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"a85e6787a85a5e1f56c567e9c52d54530c8ecf16aac45c2922f4484723ff1dbd"} Feb 19 20:59:33 crc kubenswrapper[4787]: I0219 20:59:33.763914 4787 generic.go:334] "Generic (PLEG): container finished" podID="9deab114-1731-496c-8d48-75e6c998fcfe" 
containerID="5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df" exitCode=0 Feb 19 20:59:33 crc kubenswrapper[4787]: I0219 20:59:33.764098 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7mnxb/must-gather-2wq44" event={"ID":"9deab114-1731-496c-8d48-75e6c998fcfe","Type":"ContainerDied","Data":"5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df"} Feb 19 20:59:33 crc kubenswrapper[4787]: I0219 20:59:33.765227 4787 scope.go:117] "RemoveContainer" containerID="5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df" Feb 19 20:59:34 crc kubenswrapper[4787]: I0219 20:59:34.787507 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7mnxb_must-gather-2wq44_9deab114-1731-496c-8d48-75e6c998fcfe/gather/0.log" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.020261 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7mnxb/must-gather-2wq44"] Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.022220 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7mnxb/must-gather-2wq44" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="copy" containerID="cri-o://376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab" gracePeriod=2 Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.032935 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7mnxb/must-gather-2wq44"] Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.526232 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7mnxb_must-gather-2wq44_9deab114-1731-496c-8d48-75e6c998fcfe/copy/0.log" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.526969 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.674015 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b5lb\" (UniqueName: \"kubernetes.io/projected/9deab114-1731-496c-8d48-75e6c998fcfe-kube-api-access-2b5lb\") pod \"9deab114-1731-496c-8d48-75e6c998fcfe\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.674221 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9deab114-1731-496c-8d48-75e6c998fcfe-must-gather-output\") pod \"9deab114-1731-496c-8d48-75e6c998fcfe\" (UID: \"9deab114-1731-496c-8d48-75e6c998fcfe\") " Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.684513 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9deab114-1731-496c-8d48-75e6c998fcfe-kube-api-access-2b5lb" (OuterVolumeSpecName: "kube-api-access-2b5lb") pod "9deab114-1731-496c-8d48-75e6c998fcfe" (UID: "9deab114-1731-496c-8d48-75e6c998fcfe"). InnerVolumeSpecName "kube-api-access-2b5lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.776940 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b5lb\" (UniqueName: \"kubernetes.io/projected/9deab114-1731-496c-8d48-75e6c998fcfe-kube-api-access-2b5lb\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.883696 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9deab114-1731-496c-8d48-75e6c998fcfe-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9deab114-1731-496c-8d48-75e6c998fcfe" (UID: "9deab114-1731-496c-8d48-75e6c998fcfe"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.938509 4787 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7mnxb_must-gather-2wq44_9deab114-1731-496c-8d48-75e6c998fcfe/copy/0.log" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.939228 4787 generic.go:334] "Generic (PLEG): container finished" podID="9deab114-1731-496c-8d48-75e6c998fcfe" containerID="376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab" exitCode=143 Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.939282 4787 scope.go:117] "RemoveContainer" containerID="376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.939301 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7mnxb/must-gather-2wq44" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.975195 4787 scope.go:117] "RemoveContainer" containerID="5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df" Feb 19 20:59:45 crc kubenswrapper[4787]: I0219 20:59:45.982109 4787 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9deab114-1731-496c-8d48-75e6c998fcfe-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 20:59:46 crc kubenswrapper[4787]: I0219 20:59:46.016219 4787 scope.go:117] "RemoveContainer" containerID="376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab" Feb 19 20:59:46 crc kubenswrapper[4787]: E0219 20:59:46.017150 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab\": container with ID starting with 376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab not found: ID does not exist" 
containerID="376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab" Feb 19 20:59:46 crc kubenswrapper[4787]: I0219 20:59:46.017189 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab"} err="failed to get container status \"376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab\": rpc error: code = NotFound desc = could not find container \"376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab\": container with ID starting with 376302fb96d4719a3a8e9d7029be7c2f1466529f440ba36288e4cfab7f1d44ab not found: ID does not exist" Feb 19 20:59:46 crc kubenswrapper[4787]: I0219 20:59:46.017217 4787 scope.go:117] "RemoveContainer" containerID="5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df" Feb 19 20:59:46 crc kubenswrapper[4787]: E0219 20:59:46.017688 4787 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df\": container with ID starting with 5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df not found: ID does not exist" containerID="5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df" Feb 19 20:59:46 crc kubenswrapper[4787]: I0219 20:59:46.017721 4787 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df"} err="failed to get container status \"5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df\": rpc error: code = NotFound desc = could not find container \"5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df\": container with ID starting with 5aaec996937a6cf94c03402ab12069a5bc0b57f454d6319f951d8ab27f2e33df not found: ID does not exist" Feb 19 20:59:46 crc kubenswrapper[4787]: I0219 20:59:46.902909 4787 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" path="/var/lib/kubelet/pods/9deab114-1731-496c-8d48-75e6c998fcfe/volumes" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.176223 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w"] Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177467 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="extract-utilities" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177490 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="extract-utilities" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177512 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="extract-utilities" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177521 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="extract-utilities" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177543 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="gather" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177553 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="gather" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177566 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177574 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 
21:00:00.177587 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="registry-server" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177596 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="registry-server" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177635 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="extract-content" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177643 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="extract-content" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177663 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="extract-content" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177671 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="extract-content" Feb 19 21:00:00 crc kubenswrapper[4787]: E0219 21:00:00.177683 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="copy" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177689 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="copy" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177986 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="gather" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.177999 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="9deab114-1731-496c-8d48-75e6c998fcfe" containerName="copy" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.178018 4787 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ee14fd8b-acad-446b-92e9-f0c982c2c36d" containerName="registry-server" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.178030 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4bf41e-ee63-4794-8190-a253d4ece450" containerName="registry-server" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.179148 4787 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.188051 4787 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.188085 4787 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.196789 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w"] Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.337585 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjv9r\" (UniqueName: \"kubernetes.io/projected/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-kube-api-access-kjv9r\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.337658 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-secret-volume\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.337758 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-config-volume\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.439636 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjv9r\" (UniqueName: \"kubernetes.io/projected/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-kube-api-access-kjv9r\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.439702 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-secret-volume\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.439786 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-config-volume\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.440831 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-config-volume\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.472696 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjv9r\" (UniqueName: \"kubernetes.io/projected/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-kube-api-access-kjv9r\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.477363 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-secret-volume\") pod \"collect-profiles-29525580-9tx5w\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:00 crc kubenswrapper[4787]: I0219 21:00:00.510657 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:01 crc kubenswrapper[4787]: I0219 21:00:01.027232 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w"] Feb 19 21:00:01 crc kubenswrapper[4787]: I0219 21:00:01.118325 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" event={"ID":"4e4add1b-fda9-4aec-8f32-a63f8aa2338b","Type":"ContainerStarted","Data":"079813de3649a33b429d3ca820a109eb46c9e7b8c6ed8fae9c16f8ffca5169fe"} Feb 19 21:00:02 crc kubenswrapper[4787]: I0219 21:00:02.132070 4787 generic.go:334] "Generic (PLEG): container finished" podID="4e4add1b-fda9-4aec-8f32-a63f8aa2338b" containerID="8a190e74d725a75204d3358d91625847fb179eb76f0892162c49b66beae43b86" exitCode=0 Feb 19 21:00:02 crc kubenswrapper[4787]: I0219 21:00:02.132272 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" event={"ID":"4e4add1b-fda9-4aec-8f32-a63f8aa2338b","Type":"ContainerDied","Data":"8a190e74d725a75204d3358d91625847fb179eb76f0892162c49b66beae43b86"} Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.539321 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.722475 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjv9r\" (UniqueName: \"kubernetes.io/projected/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-kube-api-access-kjv9r\") pod \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.722591 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-config-volume\") pod \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.722657 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-secret-volume\") pod \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\" (UID: \"4e4add1b-fda9-4aec-8f32-a63f8aa2338b\") " Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.723236 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e4add1b-fda9-4aec-8f32-a63f8aa2338b" (UID: "4e4add1b-fda9-4aec-8f32-a63f8aa2338b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.723532 4787 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.728308 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e4add1b-fda9-4aec-8f32-a63f8aa2338b" (UID: "4e4add1b-fda9-4aec-8f32-a63f8aa2338b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.735122 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-kube-api-access-kjv9r" (OuterVolumeSpecName: "kube-api-access-kjv9r") pod "4e4add1b-fda9-4aec-8f32-a63f8aa2338b" (UID: "4e4add1b-fda9-4aec-8f32-a63f8aa2338b"). InnerVolumeSpecName "kube-api-access-kjv9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.826482 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjv9r\" (UniqueName: \"kubernetes.io/projected/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-kube-api-access-kjv9r\") on node \"crc\" DevicePath \"\"" Feb 19 21:00:03 crc kubenswrapper[4787]: I0219 21:00:03.826524 4787 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4add1b-fda9-4aec-8f32-a63f8aa2338b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 21:00:04 crc kubenswrapper[4787]: I0219 21:00:04.162674 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" event={"ID":"4e4add1b-fda9-4aec-8f32-a63f8aa2338b","Type":"ContainerDied","Data":"079813de3649a33b429d3ca820a109eb46c9e7b8c6ed8fae9c16f8ffca5169fe"} Feb 19 21:00:04 crc kubenswrapper[4787]: I0219 21:00:04.162718 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079813de3649a33b429d3ca820a109eb46c9e7b8c6ed8fae9c16f8ffca5169fe" Feb 19 21:00:04 crc kubenswrapper[4787]: I0219 21:00:04.162786 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525580-9tx5w" Feb 19 21:00:04 crc kubenswrapper[4787]: I0219 21:00:04.610512 4787 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm"] Feb 19 21:00:04 crc kubenswrapper[4787]: I0219 21:00:04.622240 4787 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525535-7b6cm"] Feb 19 21:00:04 crc kubenswrapper[4787]: I0219 21:00:04.906028 4787 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623075c5-5d08-4036-9e25-24b66d9c353a" path="/var/lib/kubelet/pods/623075c5-5d08-4036-9e25-24b66d9c353a/volumes" Feb 19 21:00:11 crc kubenswrapper[4787]: I0219 21:00:11.530128 4787 scope.go:117] "RemoveContainer" containerID="5ce3d3fc2422e19f6ea0d8949a9aa613f3f54c6a2e06ff3cf84e08e3adba2dde" Feb 19 21:00:11 crc kubenswrapper[4787]: I0219 21:00:11.563282 4787 scope.go:117] "RemoveContainer" containerID="d496884bfe7f7b7c58685a00479ff90dd0e5aa8dfbc8abfa70c4dd3c498a22ce" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.177877 4787 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525581-5qlgq"] Feb 19 21:01:00 crc kubenswrapper[4787]: E0219 21:01:00.179146 4787 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4add1b-fda9-4aec-8f32-a63f8aa2338b" containerName="collect-profiles" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.179281 4787 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4add1b-fda9-4aec-8f32-a63f8aa2338b" containerName="collect-profiles" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.179560 4787 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4add1b-fda9-4aec-8f32-a63f8aa2338b" containerName="collect-profiles" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.180592 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.209046 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525581-5qlgq"] Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.214167 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-config-data\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.214259 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-combined-ca-bundle\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.214349 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-fernet-keys\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.214385 4787 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjmb\" (UniqueName: \"kubernetes.io/projected/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-kube-api-access-hxjmb\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.316928 4787 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-fernet-keys\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.316986 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjmb\" (UniqueName: \"kubernetes.io/projected/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-kube-api-access-hxjmb\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.317092 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-config-data\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.317162 4787 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-combined-ca-bundle\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.324229 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-combined-ca-bundle\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.324282 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-fernet-keys\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.325341 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-config-data\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.348178 4787 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjmb\" (UniqueName: \"kubernetes.io/projected/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-kube-api-access-hxjmb\") pod \"keystone-cron-29525581-5qlgq\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:00 crc kubenswrapper[4787]: I0219 21:01:00.503092 4787 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:01 crc kubenswrapper[4787]: I0219 21:01:01.002422 4787 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525581-5qlgq"] Feb 19 21:01:01 crc kubenswrapper[4787]: I0219 21:01:01.930332 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-5qlgq" event={"ID":"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8","Type":"ContainerStarted","Data":"2af7d3442d3494b9a8828f8a316dad739c7a67d966c9325ce06cbb92ca63a30c"} Feb 19 21:01:01 crc kubenswrapper[4787]: I0219 21:01:01.930946 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-5qlgq" event={"ID":"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8","Type":"ContainerStarted","Data":"ca8140c7b57084ad217254073a7041e8714890b462286a7d6b44e43f8bc514bc"} Feb 19 21:01:01 crc kubenswrapper[4787]: I0219 21:01:01.956572 4787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525581-5qlgq" podStartSLOduration=1.956551956 podStartE2EDuration="1.956551956s" podCreationTimestamp="2026-02-19 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 21:01:01.945263886 +0000 UTC m=+6129.735929838" watchObservedRunningTime="2026-02-19 21:01:01.956551956 +0000 UTC m=+6129.747217898" Feb 19 21:01:05 crc kubenswrapper[4787]: I0219 21:01:05.025859 4787 generic.go:334] "Generic (PLEG): container finished" podID="158fd2d8-5790-4a30-bc0a-e3c9374f6ff8" containerID="2af7d3442d3494b9a8828f8a316dad739c7a67d966c9325ce06cbb92ca63a30c" exitCode=0 Feb 19 21:01:05 crc kubenswrapper[4787]: I0219 21:01:05.025916 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-5qlgq" 
event={"ID":"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8","Type":"ContainerDied","Data":"2af7d3442d3494b9a8828f8a316dad739c7a67d966c9325ce06cbb92ca63a30c"} Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.507454 4787 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.697083 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-config-data\") pod \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.697659 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-fernet-keys\") pod \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.697741 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-combined-ca-bundle\") pod \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.698256 4787 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxjmb\" (UniqueName: \"kubernetes.io/projected/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-kube-api-access-hxjmb\") pod \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\" (UID: \"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8\") " Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.711223 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8" (UID: "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.722890 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-kube-api-access-hxjmb" (OuterVolumeSpecName: "kube-api-access-hxjmb") pod "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8" (UID: "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8"). InnerVolumeSpecName "kube-api-access-hxjmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.745745 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8" (UID: "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.772779 4787 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-config-data" (OuterVolumeSpecName: "config-data") pod "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8" (UID: "158fd2d8-5790-4a30-bc0a-e3c9374f6ff8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.802807 4787 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.802848 4787 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.802889 4787 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:06 crc kubenswrapper[4787]: I0219 21:01:06.802922 4787 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxjmb\" (UniqueName: \"kubernetes.io/projected/158fd2d8-5790-4a30-bc0a-e3c9374f6ff8-kube-api-access-hxjmb\") on node \"crc\" DevicePath \"\"" Feb 19 21:01:07 crc kubenswrapper[4787]: I0219 21:01:07.051397 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525581-5qlgq" event={"ID":"158fd2d8-5790-4a30-bc0a-e3c9374f6ff8","Type":"ContainerDied","Data":"ca8140c7b57084ad217254073a7041e8714890b462286a7d6b44e43f8bc514bc"} Feb 19 21:01:07 crc kubenswrapper[4787]: I0219 21:01:07.051468 4787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca8140c7b57084ad217254073a7041e8714890b462286a7d6b44e43f8bc514bc" Feb 19 21:01:07 crc kubenswrapper[4787]: I0219 21:01:07.051472 4787 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29525581-5qlgq" Feb 19 21:01:09 crc kubenswrapper[4787]: I0219 21:01:09.262889 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:01:09 crc kubenswrapper[4787]: I0219 21:01:09.263194 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:01:39 crc kubenswrapper[4787]: I0219 21:01:39.263432 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:01:39 crc kubenswrapper[4787]: I0219 21:01:39.264140 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.262900 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.263498 4787 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.263550 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.264370 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a85e6787a85a5e1f56c567e9c52d54530c8ecf16aac45c2922f4484723ff1dbd"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.264670 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://a85e6787a85a5e1f56c567e9c52d54530c8ecf16aac45c2922f4484723ff1dbd" gracePeriod=600 Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.838844 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="a85e6787a85a5e1f56c567e9c52d54530c8ecf16aac45c2922f4484723ff1dbd" exitCode=0 Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.838935 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"a85e6787a85a5e1f56c567e9c52d54530c8ecf16aac45c2922f4484723ff1dbd"} Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 
21:02:09.839137 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerStarted","Data":"232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91"} Feb 19 21:02:09 crc kubenswrapper[4787]: I0219 21:02:09.839163 4787 scope.go:117] "RemoveContainer" containerID="0993fb169021264e583ea3a9dc0d00e7500db7dc884c3cdfa993cd2fe96f5364" Feb 19 21:04:09 crc kubenswrapper[4787]: I0219 21:04:09.263925 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:04:09 crc kubenswrapper[4787]: I0219 21:04:09.264536 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:04:39 crc kubenswrapper[4787]: I0219 21:04:39.263969 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:04:39 crc kubenswrapper[4787]: I0219 21:04:39.264580 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:05:09 crc 
kubenswrapper[4787]: I0219 21:05:09.263565 4787 patch_prober.go:28] interesting pod/machine-config-daemon-wlszq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 21:05:09 crc kubenswrapper[4787]: I0219 21:05:09.264191 4787 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 21:05:09 crc kubenswrapper[4787]: I0219 21:05:09.264234 4787 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" Feb 19 21:05:09 crc kubenswrapper[4787]: I0219 21:05:09.264885 4787 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91"} pod="openshift-machine-config-operator/machine-config-daemon-wlszq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 21:05:09 crc kubenswrapper[4787]: I0219 21:05:09.264957 4787 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerName="machine-config-daemon" containerID="cri-o://232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91" gracePeriod=600 Feb 19 21:05:09 crc kubenswrapper[4787]: E0219 21:05:09.388735 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 21:05:10 crc kubenswrapper[4787]: I0219 21:05:10.167400 4787 generic.go:334] "Generic (PLEG): container finished" podID="00bdf088-5e51-4d51-9cb1-8e590898482c" containerID="232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91" exitCode=0 Feb 19 21:05:10 crc kubenswrapper[4787]: I0219 21:05:10.167451 4787 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" event={"ID":"00bdf088-5e51-4d51-9cb1-8e590898482c","Type":"ContainerDied","Data":"232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91"} Feb 19 21:05:10 crc kubenswrapper[4787]: I0219 21:05:10.167833 4787 scope.go:117] "RemoveContainer" containerID="a85e6787a85a5e1f56c567e9c52d54530c8ecf16aac45c2922f4484723ff1dbd" Feb 19 21:05:10 crc kubenswrapper[4787]: I0219 21:05:10.168821 4787 scope.go:117] "RemoveContainer" containerID="232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91" Feb 19 21:05:10 crc kubenswrapper[4787]: E0219 21:05:10.169313 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 21:05:20 crc kubenswrapper[4787]: I0219 21:05:20.893183 4787 scope.go:117] "RemoveContainer" containerID="232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91" Feb 19 21:05:20 crc kubenswrapper[4787]: E0219 21:05:20.894081 4787 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c" Feb 19 21:05:33 crc kubenswrapper[4787]: I0219 21:05:33.893422 4787 scope.go:117] "RemoveContainer" containerID="232436ec0ee6a460abae705b727a75308585c03616dbcb1ce75218c6f3f21a91" Feb 19 21:05:33 crc kubenswrapper[4787]: E0219 21:05:33.894232 4787 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wlszq_openshift-machine-config-operator(00bdf088-5e51-4d51-9cb1-8e590898482c)\"" pod="openshift-machine-config-operator/machine-config-daemon-wlszq" podUID="00bdf088-5e51-4d51-9cb1-8e590898482c"